In a first, an AI taught itself to play a video game and is beating humans



Since the earliest days of virtual chess and solitaire, video games have been a playing field for developing artificial intelligence (AI). Each victory of machine against human has helped make algorithms smarter and more efficient. But in order to tackle real-world problems – such as automating complex tasks including driving and negotiation – these algorithms must navigate environments far more complex than board games, and must learn teamwork. Teaching an AI how to work and interact with other players in order to succeed had long seemed an insurmountable task – until now.

In a new study, researchers detailed a way to train AI algorithms to reach human levels of performance in a popular 3D multiplayer game – a modified version of Quake III Arena in Capture the Flag mode.

Even though the objective of the game is straightforward – two opposing teams compete to capture each other's flags while navigating a map – winning demands complex decision-making and an ability to predict and respond to the actions of other players.

This is the first time an AI has attained human-like skills in a first-person video game. So how did the researchers do it?

The robot learning curve
In 2019, several milestones in AI research were reached in other multiplayer strategy games. Five "bots" – players controlled by an AI – defeated a professional e-sports team in a game of DOTA 2. Professional human players were also beaten by an AI in a game of StarCraft II. In all cases, a form of reinforcement learning was applied, whereby the algorithm learns by trial and error and by interacting with its environment.
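To make the trial-and-error idea concrete, here is a minimal sketch of tabular Q-learning – one of the simplest forms of reinforcement learning – on a toy one-dimensional world. This is purely illustrative: the systems that mastered DOTA 2, StarCraft II and Quake III used deep neural networks and vastly more complex training setups, and the environment, parameters and reward here are invented for the example.

```python
import random

# Toy "corridor" world: states 0..4, reward only for reaching state 4.
N_STATES = 5
ACTIONS = [-1, +1]            # move left or right
EPISODES = 500
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def train(seed=0):
    rng = random.Random(seed)
    # Q-table: estimated long-term value of taking action a in state s.
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(EPISODES):
        s = 0
        while s != N_STATES - 1:
            # Trial and error: mostly exploit the best-known action,
            # occasionally explore a random one.
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # Learn from the environment's feedback (Q-learning update).
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# The greedy policy learned purely from reward should move right toward the goal.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

The agent is never told the rules; it discovers that moving right pays off only by experiencing rewards, which is the same basic principle the game-playing systems above scale up with neural networks.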