
Why Teaching AI to Play Games Is Important

Games have proven to be an important part of AI research. From chess to Dota 2, every time AI has conquered a game, it's helped us break new ground in computer science and other fields.

July 24, 2018

OpenAI, the artificial intelligence research lab founded by Sam Altman and Elon Musk, recently declared that it would send a team to Vancouver in August to participate in a professional tournament of the popular online battle arena game Dota 2. But unlike the other teams competing for the multi-million-dollar prize, OpenAI's team will involve no humans, at least not directly.

Called OpenAI Five, the team consists of five artificial neural networks that have been burning through huge amounts of computing power on Google's cloud, practicing the game over and over, millions of times. OpenAI Five has already bested semi-pro players at Dota 2 and will test its mettle against the top 1 percent of players come August.

At first glance, spending expensive computing resources and scarce AI talent on teaching AI to play games might seem irresponsible. After all, OpenAI houses some of the world's top AI scientists, who, according to The New York Times, earn seven-figure salaries. Can't they work on more important problems, such as developing AI that can fight cancer or make self-driving cars safer?

Absurd as it may seem to some, games have proven to be an important part of AI research. From chess to Dota 2, every game AI has conquered has helped us break new ground in computer science and other fields.

Games Help Trace the Progress of AI

Since the inception of the idea of artificial intelligence in the 1950s, games have been an efficient way to measure the capabilities of AI. They're especially convenient for testing new AI techniques, because you can quantify an AI's performance with numeric scores and win-lose outcomes and compare it against humans or other AIs.

The first game researchers tried to master with AI was chess, which in the field's early days was considered the ultimate test of progress. In 1997, IBM's Deep Blue became the first computer to defeat a reigning world champion, Garry Kasparov, in a chess match. The AI behind Deep Blue used a brute-force method that analyzed millions of move sequences before making each move.
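To get a sense of what that brute-force approach looks like, here's a minimal Python sketch of minimax search, the family of techniques Deep Blue's method belonged to, applied to tic-tac-toe instead of chess. (This is an illustration of the idea, not Deep Blue's actual code; a real chess engine would add a handcrafted evaluation function and aggressive pruning.)

```python
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score a position by simulating every possible continuation:
    +1 if X can force a win, -1 if O can, 0 for a forced draw."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0  # board full: draw
    nxt = "O" if player == "X" else "X"
    scores = []
    for m in moves:
        board[m] = player        # try the move...
        scores.append(minimax(board, nxt))
        board[m] = " "           # ...then undo it
    return max(scores) if player == "X" else min(scores)

def best_move(board, player):
    """Pick the move whose fully searched subtree scores best for `player`."""
    nxt = "O" if player == "X" else "X"
    def score(m):
        board[m] = player
        s = minimax(board, nxt)
        board[m] = " "
        return s
    moves = [i for i, cell in enumerate(board) if cell == " "]
    return (max if player == "X" else min)(moves, key=score)

board = list(" " * 9)
print("Best opening move for X:", best_move(board, "X"))
```

Tic-tac-toe's game tree is small enough to search exhaustively. Chess, with roughly 35 legal moves per position, makes looking d moves ahead cost on the order of 35^d evaluations, which is why Deep Blue needed specialized hardware, and why the approach hits a wall with Go's roughly 250 moves per position.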

While the method enabled Deep Blue to master chess, it was nowhere near efficient enough to tackle more complex board games, and by today's standards it's considered crude. When Deep Blue defeated Kasparov, one scientist remarked that it would take another hundred years before AI could conquer the ancient Chinese game of Go, whose board has more possible configurations than there are atoms in the observable universe.

But in 2016, researchers at Google-owned AI company DeepMind created AlphaGo, a Go-playing AI that beat world champion Lee Sedol 4 to 1 in a five-game match. AlphaGo replaced Deep Blue's brute-force method with deep learning, an AI technique loosely inspired by the way the human brain works. Instead of examining every possible combination, AlphaGo studied how humans played Go, then tried to identify and replicate successful gameplay patterns.
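Conceptually, the shift looks something like this hypothetical Python sketch: instead of enumerating move sequences, a trained network maps a board position straight to a probability for each move. (AlphaGo's real policy network is a deep convolutional network trained on millions of expert positions; the random linear layer below is only a placeholder for those learned weights.)

```python
import numpy as np

rng = np.random.default_rng(0)
CELLS = 19 * 19  # a Go board, flattened to a vector

W = rng.normal(size=(CELLS, CELLS))  # stand-in for learned parameters

def policy(board):
    """board: +1 for our stones, -1 for the opponent's, 0 for empty.
    Returns a probability for playing on each point."""
    logits = board @ W
    logits[board != 0] = -np.inf      # occupied points are illegal moves
    exps = np.exp(logits - logits.max())
    return exps / exps.sum()          # softmax over legal moves

board = np.zeros(CELLS)
board[180] = 1.0                      # a hypothetical stone near the center
probs = policy(board)
print("Highest-probability move:", int(probs.argmax()))
```

The point is the cost model: one pass through the network scores every candidate move at once, instead of the exponential blowup of simulating each sequence to its end.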

DeepMind's researchers later created AlphaGo Zero, an improved version of AlphaGo trained through reinforcement learning alone, with no human gameplay data. AlphaGo Zero was given only the basic rules of Go and learned the game by playing against itself countless times. It went on to beat its predecessor 100 games to zero.
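Here's a toy Python sketch of that self-play loop, using the much simpler game of Nim (10 stones, take one to three per turn, whoever takes the last stone wins). It's a drastic simplification: AlphaGo Zero pairs a deep neural network with Monte Carlo tree search rather than a lookup table, and the game and parameters here are my own stand-ins. But the core idea of improving purely by playing against itself from the rules alone is the same.

```python
import random

ACTIONS = (1, 2, 3)   # you may take 1, 2, or 3 stones per turn
Q = {}                # Q[(stones_left, action)] = value for the player to move

def q(stones, action):
    return Q.get((stones, action), 0.0)

def choose(stones, epsilon):
    """Epsilon-greedy: usually play the best-known move, sometimes explore."""
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < epsilon:
        return random.choice(legal)
    return max(legal, key=lambda a: q(stones, a))

def train(episodes=50_000, alpha=0.1, epsilon=0.1):
    for _ in range(episodes):
        stones = 10
        while stones > 0:
            action = choose(stones, epsilon)
            remaining = stones - action
            if remaining == 0:
                target = 1.0   # we took the last stone: win
            else:
                # It's now the opponent's turn, so our value is the
                # negation of their best value (a negamax-style update).
                target = -max(q(remaining, a) for a in ACTIONS if a <= remaining)
            Q[(stones, action)] = q(stones, action) + alpha * (target - q(stones, action))
            stones = remaining  # hand the smaller pile to the other player

train()
for stones in range(1, 11):
    print(f"{stones} stones left -> take {choose(stones, epsilon=0.0)}")
```

After enough games, the table recovers Nim's textbook strategy (always leave your opponent a multiple of four stones) without ever seeing a human play.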

Board games have limitations, though. First, they're turn-based, which means the AI isn't under pressure to make decisions in an environment that changes constantly. Second, the AI has access to all the information in the environment (in this case, the board) and doesn't have to make guesses or take risks based on unknown factors.

With this in mind, an AI called Libratus made the next breakthrough in artificial intelligence research by beating some of the best players at heads-up, no-limit Texas Hold 'Em poker. Developed by researchers at Carnegie Mellon University, Libratus showed that AI can compete with humans in situations where it has access to only partial information. Libratus used several AI techniques to learn poker and improved its gameplay by examining the tactics of its human opponents.

Real-time video games are the next frontier for AI, and OpenAI isn't the only organization in the field. Facebook has experimented with teaching AI to play the real-time strategy game StarCraft, and DeepMind has developed an AI that can play the first-person shooter Quake III. Each game presents its own challenges, but the common denominator is that they all force the AI to make decisions in real time and with incomplete information. Moreover, they give AI an arena to test its might against teams of opponents and to learn teamwork.

For now, no one has developed an AI that can beat professional players at these games. But the very fact that AI is competing with humans at such complex games shows how far the field has come.

Games Help Develop AI in Other Fields

While scientists have used games as testbeds for developing new AI techniques, their achievements haven't remained limited to games. In fact, game-playing AIs have paved the way for innovations in other fields.

In 2011, IBM introduced a supercomputer capable of natural language processing and generation (NLP/NLG), named after the company's former CEO Thomas J. Watson. The computer played the famous TV quiz show Jeopardy! against two of the game's best players and won. Watson later became the basis for a broad line of IBM AI services in domains including healthcare, cybersecurity, and weather forecasting.

DeepMind is applying the experience it gained developing AlphaGo to other fields where reinforcement learning can help. The company launched a project with National Grid UK to use AlphaGo's smarts to improve the efficiency of the British power grid. Google, DeepMind's parent company, is also employing the technique to slash electricity costs at its huge data centers by automating the control of power consumption across its hardware. And Google is using reinforcement learning to train robots that may one day handle objects in factories.

Libratus, the poker-playing AI, might pave the way for algorithms that can handle situations such as political negotiations and auctions, where an AI has to take risks and make short-term sacrifices for long-term gains.

I, for one, am looking forward to seeing how OpenAI Five performs in August's Dota 2 competition. While I'm not particularly interested in whether the neural networks and their developers take home the $15 million prize, I'm keen to see what new windows OpenAI Five's accomplishments will open.



About Ben Dickson


Ben Dickson is a software engineer and tech blogger. He writes about disruptive tech trends including artificial intelligence, virtual and augmented reality, blockchain, Internet of Things, and cybersecurity. Ben also runs the blog TechTalks. Follow him on Twitter and Facebook.
