Thursday, May 30, 2019

DeepMind AI Reaches 'Human-Level Performance' In Modded 'Quake Arena III'

Researchers at Alphabet subsidiary DeepMind have taught artificial intelligence to play video games such as StarCraft II, but had yet to tackle a first-person 3D game. Now, the machine learning research firm says its AI has achieved “human-level performance” in a modded version of Quake III Arena, the online shooter released by id Software in 1999.

Creating AI that can beat human players at games is hard; the AI built into video games has made progress, but it remains very limited and is deliberately designed not to play like a human. DeepMind’s programs have upended that in recent years. For example, a DeepMind-developed AI called AlphaStar trounced two professional StarCraft II players in January, though under restricted conditions. But DeepMind isn’t necessarily interested in using this technology to make games more fun, or more difficult. Instead, it’s using the digital world of Quake III Arena to teach AI to mimic human behavior in the real world.

“The real world contains multiple agents, each learning and acting independently to cooperate and compete with other agents,” DeepMind wrote in the paper, which was published in Science on Thursday.

The team chose a modified version of Quake III Arena’s “Capture the Flag” mode—in which two teams compete to capture the most flags in five minutes—because its AI agents must not only contend with opponents, but also navigate an environment and score points.

DeepMind’s version of Quake III Arena isn’t the same game you played in the age of Tamagotchis and JNCO jeans. It’s a modded version of the game that uses Quake III Arena maps but has no guns or human models; the AI players are little balls that move through simplified maps. DeepMind said in its blog post that “all game mechanics remain the same,” except that instead of shooting at enemy players, the AI agents “tag” each other to send them back to base to respawn.

Illustration of AI agents playing 'Quake III Arena.' Credit: DeepMind

Crucially, the AI agents in DeepMind’s Quake III Arena study had no access to game information that a human player wouldn’t have, and didn’t learn from each other. Instead, each agent learned independently from raw pixel data and the game score. This resulted in “decentralized control within a team” of AI agents, according to the paper.
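
DeepMind’s actual training setup is far more elaborate (a whole population of agents trained by reinforcement learning), but the contract described here, where each agent sees only its own pixel frames and a score signal and never another agent’s internals, can be sketched as a generic multi-agent loop. Everything below (the ToyArena stub, the RandomAgent placeholder, all names) is a hypothetical illustration, not DeepMind’s code:

    import numpy as np

    class ToyArena:
        """Hypothetical stand-in for the modded game: it hands each agent a
        random 'pixel' frame and a random score signal every step."""
        def __init__(self, n_agents, frame_shape=(84, 84, 3)):
            self.n_agents = n_agents
            self.frame_shape = frame_shape
            self.rng = np.random.default_rng(1)

        def reset(self):
            return [self.rng.random(self.frame_shape) for _ in range(self.n_agents)]

        def step(self, actions):
            observations = [self.rng.random(self.frame_shape) for _ in range(self.n_agents)]
            rewards = [float(self.rng.integers(2)) for _ in range(self.n_agents)]
            return observations, rewards

    class RandomAgent:
        """Placeholder for a learned policy: maps a pixel frame to an action.
        In the study, each agent trains its own network from pixels and score."""
        def __init__(self, n_actions, seed):
            self.n_actions = n_actions
            self.rng = np.random.default_rng(seed)

        def act(self, frame):
            return int(self.rng.integers(self.n_actions))

        def learn(self, frame, action, reward):
            pass  # a real agent would update its network weights here

    def play_episode(agents, env, steps=100):
        """Decentralized control: every agent acts on its own observation only
        and never reads a teammate's or opponent's internal state."""
        observations = env.reset()
        for _ in range(steps):
            actions = [agent.act(obs) for agent, obs in zip(agents, observations)]
            next_observations, rewards = env.step(actions)
            for agent, obs, action, reward in zip(agents, observations, actions, rewards):
                agent.learn(obs, action, reward)
            observations = next_observations

    env = ToyArena(n_agents=4)
    play_episode([RandomAgent(n_actions=6, seed=i) for i in range(4)], env)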

“What makes these results so exciting is that these agents perceive their environment from a first-person perspective, just as a human player would,” DeepMind research scientist Thore Graepel said in a statement. “In order to learn how to play tactically and collaborate with their teammates, these agents must rely on feedback from the game outcomes - without any teacher or coach showing them what to do.”

The system worked. To measure the AI’s skill, DeepMind ran a tournament in its modded version of Quake III Arena that pitted 40 human players against the company’s trained agents. Some agents surpassed even highly skilled humans’ win rates, as documented using the Elo rating system, which ranks players according to their expected probability of winning.
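
For reference, the standard Elo model converts the gap between two ratings into an expected win probability, then moves both ratings toward the observed result after each game. A minimal Python sketch (the function names and the K-factor of 32 are illustrative choices, not taken from the paper):

    def elo_expected(r_a, r_b):
        """Probability that player A beats player B under the Elo model."""
        return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

    def elo_update(r_a, r_b, score_a, k=32.0):
        """Update both ratings after one game; score_a is 1 for an A win,
        0.5 for a draw, 0 for a loss."""
        delta = k * (score_a - elo_expected(r_a, r_b))
        return r_a + delta, r_b - delta

    # A 1600-rated agent beats a 1500-rated human:
    print(elo_expected(1600, 1500))     # ~0.64 expected win probability
    print(elo_update(1600, 1500, 1.0))  # agent rises to ~1611.5, human drops to ~1488.5

Under this model, an agent whose rating sits well above the human pool’s is one that keeps winning more often than its current rating predicts.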

“Our agents’ superior performance might be a result of their faster visual processing and motor control,” DeepMind wrote on the blog. “However, by artificially reducing this accuracy and reaction time, we saw that this was only one factor in their success.”
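
The post doesn’t detail how that handicap was implemented. One crude way to simulate slower, more human reaction time is to buffer an agent’s actions for a few frames before they take effect; the wrapper below is purely illustrative (it fits the RandomAgent sketch above) rather than DeepMind’s method:

    from collections import deque

    class DelayedAgent:
        """Holds each chosen action in a queue for `delay` frames, so the
        action executed now was chosen `delay` steps ago (illustrative only)."""
        def __init__(self, agent, delay, noop_action=0):
            self.agent = agent
            self.queue = deque([noop_action] * delay)

        def act(self, frame):
            self.queue.append(self.agent.act(frame))
            return self.queue.popleft()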

from VICE http://bit.ly/2KgJbQ7
