by Goran Damnjanovic, Gaming Columnist
Published in Gaming on 20th June, 2018
Artificial intelligence in video games is almost as old as video games themselves. It wasn't part of the earliest titles, which were simple games where two players battled against one another, like the first game of them all, Pong, or simple single-player games where the player battled the game itself, like Arkanoid and Tetris.
Back then, even the most tenuous excuse for artificial intelligence was unnecessary. But then Space Invaders came along, and that game brought a revolution to our favorite pastime. Firstly, it was the first game to offer multi-colored presentation (achieved by using colored, see-through tape that made the aliens look different). And secondly, the game brought live, mobile, and deadly opponents to a medium where death wasn't just the result of the player's bad choices or slow reflexes, but also of the behavior of enemies. Space Invaders gave birth to enemy AI in games.
That was 40 years ago, and since then video game developers have massively upgraded the AI we see in games, to the point that some games feature enemy AI so smart that most players consider it dumb, because it uses advanced tactics the player fighting those artificial enemies never even notices. But on the other hand, in the last decade enemy AI seems to have taken a backseat in video game development. We have better visuals, richly produced original scores, and famous actors lending their likenesses and voices, making games more cinematic experiences, yet we seem stuck with the same old enemy AI routines repeating the same tactics and procedures over and over. Is this really the case? What is enemy (and not just enemy) AI in video games, really? Is there a way to make intelligent video game AI that will challenge players but won't frustrate them? And how has enemy AI (d)evolved since Space Invaders first showed us enemies capable of killing us on their own? We will (try to) answer these, and a couple more questions, in this piece.
Since most of us are at least briefly familiar with what artificial intelligence is really about, we can debunk the biggest misunderstanding about video game enemy AI right away: it isn't AI at all. Artificial intelligence, in general, is a computer program based on algorithms whose primary function is solving problems and creating solutions with minimal intervention from humans. Programmers write the algorithms, provide the AI with data to feed on, and then let it develop on its own. The AI program does that either by learning from available data (like the millions of images used to train AI to recognize different types of objects) or by learning from experience (like letting an AI play millions of Go games, ultimately becoming good enough to beat the best human Go player).
But in video games, developers cannot simply write an AI routine and let it evolve by itself, because video games are all about the players and their experience. The experience should be fun, challenging to a degree, and a bit predictable, to allow players to learn how to play the game and deal with NPCs. So, instead of creating a proper AI program, video game developers create something better described as an "artificial behavior" program than artificial intelligence. AI in video games is mostly based on scripts.
For instance, the AI in Space Invaders was very simplistic. Aliens would move left or right, and the whole pack would let go of one of its members to fly towards the player. This was based on chance, with the speed of movement and the chance to break from the group and lunge toward the player growing on higher levels. Games like Pac-Man brought more advanced AI routines, with varying behavior across different enemies (each ghost had its own speed and level of aggressiveness), but they were still limited to just a few simple scripts. Games like the original Metal Gear brought a revolution to video game AI routines by introducing enemies capable of hearing player movement, noticing gunshots, and behaving based on far more variables than the AI in other games of the time.
During the nineties, video game developers adopted an AI technique that quickly became the most popular way of programming NPC characters. Dubbed the "finite state machine" (FSM), this algorithm had long been used in programming, but it wasn't incorporated into video games before the nineties. This type of algorithm is excellent for video games because it allows for simplistic behavior based on just a few variables while still looking intelligent. A finite state machine works by giving an AI enemy a few basic states and then letting it behave based on a couple of "if...then..." statements. For instance, if an enemy sees the player, it can go into cover and start shooting, or start running towards the player if it wields a melee weapon. A more advanced version of this algorithm can assign several possible outcomes to each state. In this case, an enemy can do more than one thing after spotting the player: it can go into cover and start shooting, go into cover and throw a grenade, or call for backup. This way, the AI shows a bigger variance in its behavior, creating a more varied and more fun gaming experience. But the finite state machine is just one type of AI used in video games.
The finite state machine is a hugely popular way to program video game AI, but it is far from the only one out there. This type of algorithm is used in most FPS and TPS games, most action adventures, and most RPG titles. Creating a complex FSM can make a game more varied and fun to play, but most games utilizing this type of enemy AI are pretty predictable, and after a while their combat tends to become boring. This is why this type of enemy AI scripting has seen many revisions, each adding to its diversity and each making enemies appear smarter. The first Half-Life, for instance, gave enemies voice lines that triggered once a decision had been made. When a soldier decided he should go into cover, he would yell something like "going into cover!", and when enemies decided it was time to flush the player out of hiding they would yell "throwing a grenade!"
This, in combination with squad AI, which allowed members of a squad to communicate and coordinate their attacks, was, and still is, one of the best examples of how a finite state machine can provide a challenging, varied, and incredibly fun experience, with enemies looking like intelligent entities without actually showing intelligent behavior (their actions were still guided by "if...then" statements). Combine lots of potential states with squad communication, add a trigger chance to each state (say, five percent for an enemy to rush the player, ten percent to stand and shoot, twenty-five percent to look for cover, and so on), and you can make players believe that enemies show intelligent behavior.
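To make the idea concrete, here is a minimal sketch of such a weighted finite state machine. The states, trigger chances, and names below are invented for illustration; they are not taken from Half-Life or any real game.

```python
import random

# A minimal weighted finite state machine for an enemy NPC.
# All states and trigger chances below are made up for illustration.

class EnemyFSM:
    # Possible reactions to spotting the player, with trigger weights
    # (five percent to rush, ten percent to stand and shoot, etc.).
    REACTIONS = [
        ("rush_player", 0.05),
        ("stand_and_shoot", 0.10),
        ("take_cover", 0.25),
        ("throw_grenade", 0.20),
        ("call_for_backup", 0.40),
    ]

    def __init__(self):
        self.state = "patrol"

    def on_spot_player(self):
        """Pick the next state by weighted chance and announce it,
        Half-Life style."""
        states, weights = zip(*self.REACTIONS)
        self.state = random.choices(states, weights=weights)[0]
        return "Enemy yells: " + self.state.replace("_", " ") + "!"

enemy = EnemyFSM()
print(enemy.on_spot_player())  # e.g. "Enemy yells: take cover!"
```

Even this toy version shows why the trick works: the player never sees the weights, only a soldier who sometimes rushes, sometimes hides, and sometimes calls friends.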
Since strategy games are more complex than shooters when it comes to strategy and ways of defeating enemies, AI in those games is based on more complex algorithms. The primary algorithm used in strategy games is Monte Carlo Tree Search (MCTS). (Game-tree search has a famous pedigree: the Deep Blue program that defeated Garry Kasparov at chess back in 1997 also searched game trees, though it relied on brute-force minimax search rather than Monte Carlo sampling.) MCTS solves problems by performing random trials: it simulates possible moves, estimates the gain from each of them, and ultimately picks the one with the biggest gain. But since we don't have supercomputers in our living rooms, the version used in strategy games is far simpler, considering not all possible moves but just a couple of thousand of them. It then picks the ones that give the best payoff.
In turn-based strategy games, this algorithm is used to decide how an enemy AI plays its game. For instance, it can decide to start researching a new technology next turn if it has enough research points, or to build a new unit if it is short on soldiers. In Civilization, different AI leaders are given different base behaviors (aggressive, expanding through culture, scientific, etc.), and they act on these parameters combined with the MCTS algorithm. In RTS games, MCTS is combined with finite state machines to let enemies act in real time, though in most games that means the enemy AI can simply see the player's army, base, and resources.
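The core idea can be illustrated with a toy example: score each legal move by playing many random games to the end, then pick the move with the best average outcome. The game here is simple Nim (take one to three stones; whoever takes the last stone wins), a stand-in for a real strategy game, not anything from Civilization.

```python
import random

# Toy Monte Carlo move selection: evaluate each legal move by random
# playouts and pick the one with the best win rate. Game: Nim, where
# players take 1-3 stones and taking the last stone wins.

def random_playout(stones, my_turn):
    """Finish the game with random moves; return True if we win."""
    while stones > 0:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return my_turn  # whoever just moved took the last stone
        my_turn = not my_turn
    return not my_turn  # pile was already empty: previous mover won

def best_move(stones, trials=2000):
    """Estimate each move's win rate over random playouts."""
    scores = {}
    for move in range(1, min(3, stones) + 1):
        wins = sum(random_playout(stones - move, my_turn=False)
                   for _ in range(trials))
        scores[move] = wins / trials
    return max(scores, key=scores.get)

print(best_move(2))  # -> 2 (taking both stones wins on the spot)
```

A real strategy game swaps "take N stones" for moves like "build a unit" or "research a technology" and adds a proper tree over the playouts, but the sampling idea is the same.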
The third type is a bit more advanced. It is based on planning, and the first game to utilize it was the famous F.E.A.R., a first-person shooter that still holds the throne for the best enemy AI in games. The premise is quite simple: give enemy actors two sets of instructions (goals and possible actions) and let them do the thinking. Instead of locking certain behaviors to certain parameters, which makes enemy AI extremely limited and tied to a strict set of "if...then" states, a free-form planning algorithm lets enemies show extremely varied behavior that is perceived as intelligent, and lets them adapt to new situations. For instance, they could flank the player from every side that offered a passable route, or use grenades to force the player back into their line of sight. Planning, combined with highly aggressive behavior and squad communication, gave the soldiers in F.E.A.R. an unmatched level of realism that is still analyzed by many video game developers.
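A toy planner in that spirit can be sketched in a few lines: the enemy gets a goal and a set of actions with preconditions and effects, and searches for a sequence of actions that satisfies the goal. The facts and action names below are invented for illustration; F.E.A.R.'s real planner is considerably more sophisticated.

```python
from collections import deque

# A toy goal-oriented planner: actions have preconditions and effects
# over simple boolean facts, and the enemy searches for the shortest
# action sequence that reaches its goal. Names are invented.

ACTIONS = {
    "move_to_cover": ({"player_visible"}, {"in_cover"}),
    "throw_grenade": ({"in_cover", "has_grenade"}, {"player_flushed"}),
    "shoot_player":  ({"player_flushed"}, {"player_suppressed"}),
}

def plan(state, goal):
    """Breadth-first search over action sequences (shortest plan first)."""
    queue = deque([(frozenset(state), [])])
    seen = {frozenset(state)}
    while queue:
        facts, steps = queue.popleft()
        if goal <= facts:          # goal facts all achieved
            return steps
        for name, (pre, eff) in ACTIONS.items():
            if pre <= facts:       # action is applicable
                new = frozenset(facts | eff)
                if new not in seen:
                    seen.add(new)
                    queue.append((new, steps + [name]))
    return None                    # goal unreachable

print(plan({"player_visible", "has_grenade"}, {"player_suppressed"}))
# -> ['move_to_cover', 'throw_grenade', 'shoot_player']
```

The point is that nobody scripted "cover, then grenade, then shoot"; the sequence falls out of the goal and the available actions, which is why planned behavior adapts when the situation (the starting facts) changes.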
And finally, we have the learning, or emergent, AI type. This one learns from player actions and can be trained to use certain actions more frequently. It was utilized in Black and White I and II, where your pet animal would learn through reward-and-punishment mechanics. It evolved through the course of the game, becoming aggressive, nonviolent, or something in between. For instance, you could pet it when it ate a citizen of your island but punish it when it showed aggression towards the enemy's pet, ultimately ending up with a nonviolent animal that chews on humans from time to time.
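The mechanic can be sketched as a simple action-weight learner: each action has a weight, petting (reward) raises it, punishment lowers it, and the creature picks actions in proportion to the learned weights. Everything below (action names, numbers, the floor value) is invented for illustration and is far simpler than the actual Black and White system.

```python
import random

# A minimal reward-and-punishment learner, loosely in the spirit of
# the Black and White creature. All details are made up.

class Creature:
    def __init__(self, actions):
        self.weights = {a: 1.0 for a in actions}  # start indifferent

    def choose(self):
        """Pick an action in proportion to its learned weight."""
        actions = list(self.weights)
        return random.choices(actions, [self.weights[a] for a in actions])[0]

    def feedback(self, action, reward):
        """reward > 0 is a pet, reward < 0 is a punishment."""
        self.weights[action] = max(0.05, self.weights[action] + reward)

pet = Creature(["eat_villager", "attack_rival", "help_farmers"])
for _ in range(50):
    pet.feedback("attack_rival", -0.1)  # punish aggression...
    pet.feedback("eat_villager", +0.1)  # ...but reward the snacking

# The creature now mostly snacks on villagers and rarely attacks rivals.
```

The small weight floor keeps every action possible, which is what makes the pet's personality feel emergent rather than switched off.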
Now that we've covered the basics, let's talk a bit about the evolution of AI in video games and the reasons for its apparent stagnation during the last decade.
We already covered some of these milestones, like the first game to utilize variable behavior (Pac-Man) and the first game to introduce player awareness (Metal Gear). Squad tactics were first used in Half-Life, along with voice cues about upcoming enemy actions, which made the experience much more enjoyable and dynamic. Black and White showed how AI in video games can evolve, how it can learn from past actions and become a unique entity shaped by reward-and-punishment mechanics. You could create your own unique pet animal in the Black and White games; it evolved its behavior patterns based on how you treated it.
Halo brought cool small details like enemies throwing grenades back at the player and Grunts running away when Master Chief kills their leader. Further, the behavior tree, the algorithm on which enemy behavior in Halo is based, became extremely popular in the video game industry once Bungie introduced the technique. Enemies in Halo also used cover better than NPC enemies in any other game, popularizing cover-based enemy behavior.
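A behavior tree can be sketched in a few lines: composite nodes (selectors and sequences) tick their children, and the leaves are conditions or actions. The Grunt logic below is a made-up illustration of the "flee when the leader dies" behavior, not Bungie's actual code.

```python
# A stripped-down behavior tree. Selectors try children in priority
# order until one succeeds; sequences require every child to succeed.

def selector(*children):
    """Succeed as soon as one child succeeds (priority list)."""
    return lambda state: any(child(state) for child in children)

def sequence(*children):
    """Succeed only if every child succeeds, in order."""
    return lambda state: all(child(state) for child in children)

def condition(key):
    """Leaf: check a boolean fact about the world."""
    return lambda state: state.get(key, False)

def action(name):
    """Leaf: 'perform' a behavior by logging it, then succeed."""
    def tick(state):
        state.setdefault("log", []).append(name)
        return True
    return tick

# Invented Grunt logic: if the leader is dead, flee; otherwise fight.
grunt = selector(
    sequence(condition("leader_dead"), action("flee")),
    sequence(condition("sees_player"), action("attack")),
)

state = {"leader_dead": True, "sees_player": True}
grunt(state)
print(state["log"])  # -> ['flee']
```

Because the tree is just nested priorities, designers can graft new reactions (grenade tossed nearby, ally down) onto one branch without touching the rest, which is a big part of why the technique spread so quickly.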
Next, we had games like Total War and Warhammer 40,000: Dawn of War, which introduced emotion-based mechanics. Enemies featured emotion scripts, and when the player started to gain the upper hand in battle, enemy soldiers would flee and become much less effective in combat. Dawn of War did this with a morale stat that increased or decreased attack values both for enemies and for player-controlled units. On top of that, Dawn of War also greatly improved enemy AI behavior by adding cover to the game and by utilizing a simplistic but very aggressive AI routine that provided a tough but fair challenge to human players.
F.E.A.R. introduced planning AI featuring a sort of free-will system in which enemies make decisions based on many different variables. This system, built around a STRIPS-style planner (the technique became known as Goal-Oriented Action Planning, or GOAP), allowed enemies to base their behavior on the context of each gunfight, providing an unprecedented level of variation along with advanced tactics like flushing the player out with grenades, always trying to flank, creating cover by knocking down tables, and many other context-sensitive behaviors that have been replicated in many modern games.
Then we have Rockstar and their RAGE engine, which made its high-profile debut in GTA IV. The engine allows emotional responses from NPCs, introduces extremely complex traffic simulation, and lets a huge number of NPCs function in the world. And when it comes to advanced tactics in an open-world game, we have to talk about Far Cry 2. The game hit many milestones, one of them being enemy AI that functioned in a huge open-world shooter. Enemies would use cover based on their surroundings, communicate and improvise tactics on the go, revive downed buddies while providing covering fire, and do many other cool things.
But since GTA IV and Far Cry 2 came out in 2008, we haven't seen any massive leap in the evolution of video game AI. Only a handful of games have featured advanced AIs that managed to amaze players and show something new. The xenomorph in Alien: Isolation is one example: an AI that searches the level at all times and knows where the player might hide. The Alien knows the potential hiding spots and wanders through a level searching for the player. Once it hears movement, it advances towards the sound while searching lockers, hidden corners, and other hiding places, until it locates the player or realizes there's no one to be found in the vicinity.
The second recent example showing how video game AI can still evolve is the Drivatar system from the Forza games. The system uses neural networks to learn how each player of the game drives, analyzing driving style, speed, aggression, and other variables. It is like learning how someone drives by watching their ghost-car lap times. Over time, a Drivatar learns to emulate the driving style of a particular player, complete with their common mistakes, level of aggression, skill at attacking corners, and more. These Drivatars are then used in single-player races to provide human-like opponents, which is a great accomplishment. But this way of emulating human drivers has two big downsides. Firstly, Drivatars don't behave like professional racers, because they are based on regular Joes, players who have never set foot on a real racetrack. And secondly, they are poor at recognizing the player on the track, behaving as if the player weren't there. This breaks the immersion when opponents don't slow down as you overtake them and never react to your mistakes.
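Stripped of the neural networks, the underlying idea can be sketched very simply: record a few style variables from the player's laps, average them, and use the result to parameterize an AI driver. Everything below (the variable names, the numbers, the speed formula) is invented for illustration and bears no relation to the real Drivatar internals.

```python
from statistics import mean

# A heavily simplified imitation-learning sketch: summarize a player's
# recorded laps into a "style" and drive an AI with it. All invented.

def learn_style(laps):
    """laps: list of dicts with per-lap 'corner_speed' and 'aggression'
    in the 0..1 range (hypothetical telemetry values)."""
    return {
        "corner_speed": mean(lap["corner_speed"] for lap in laps),
        "aggression":   mean(lap["aggression"] for lap in laps),
    }

def ai_corner_speed(style, ideal_speed):
    # A more aggressive driver carries more speed into corners.
    return ideal_speed * style["corner_speed"] * (0.9 + 0.2 * style["aggression"])

laps = [
    {"corner_speed": 0.95, "aggression": 0.7},
    {"corner_speed": 0.90, "aggression": 0.8},
]
style = learn_style(laps)
```

A real system learns far richer behavior than two averages, but the pipeline is the same: telemetry in, a compact driver model out, and the model races in your place.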
But during the last decade, those few bright examples aside, the evolution of video game AI has stagnated, and has even started to deteriorate. What happened?
During the last decade, many gamers noticed that enemies in video games became, well, dumber. Instead of building upon the advancements of the past, modern games either recycle old ideas or try combining them, which keeps developers from creating something new. In the modern video game industry, the majority of money is pumped into better graphics, animation, voice-over, soundtracks, and promotion. Under-the-hood elements such as physics, AI, and the creation of a living, breathing in-game world are often pushed into the background. Because of this, most video games that came out during the last decade don't offer anything new when it comes to AI.
But maybe we, the players, are to blame for this. In one interesting confession from a video game developer, we can read how gamers reacted negatively to an advanced enemy AI capable of bait tactics, luring players into shooting at one enemy while others flanked them. Players thought the NPCs were cheating and that the AI was pretty dumb. That's one way to look at it: gaming became massively popular, and once something becomes that popular, it has to adapt to the masses. And the masses don't want challenging AI. The masses want experiences that are exciting and filled with interesting adventures, but relatively easy. This is why the niche of super-hard games like Dark Souls became so popular: hardcore gamers love a challenge, and they embraced the Souls games because these days there aren't many games offering that level of challenge. If developers created a truly smart AI, it would be pretty challenging even on lower difficulty levels, and that would mean a more frustrating experience, and no one wants that.
This is why we don't see AI based on machine learning, the most popular AI training technique of today. Machine learning was used for your pet AI in Black and White, but using it for classic enemy NPCs could prove catastrophic. Firstly, a machine learning AI would be too unpredictable. It would drive level designers crazy, because they couldn't construct levels that provide a fun and dynamic experience: they have to know how the AI behaves, and they want to be able to tweak it in certain areas to create a compelling experience for gamers. On the other side of the line, gamers would hate this kind of AI because they couldn't learn its behavior and come up with effective strategies for defeating it. Next, such an AI could go haywire in case of a bug, because it isn't constrained by scripts and a definite number of possible behaviors. And finally, if trained enough, it would become so smart and adaptive that it would provide an extreme challenge even to the best players. You can't just dumb down a machine learning AI; you can't control or restrain it. You get what you get, and in most cases the result would be too challenging for ninety-nine percent of players.
But developers can use parts of advanced AI to create a more compelling experience. For instance, companion AIs found in games like BioShock Infinite, The Last of Us, or Half-Life 2 are there to make the game more immersive, and this is the area where quasi-intelligent, machine-learning AIs could be incorporated into games. They would be able to understand and process speech and produce complete, meaningful sentences, providing players with a living, breathing companion that feels truly alive. If there's an area where we will see a huge leap forward in the coming years, it's this one.
As for enemy AI, well, it doesn't have to be intelligent. It needs to be believable, to offer a level of realism that matches enemies' ever-greater visual detail. It has to offer a challenge, but be relatively predictable, so players can learn its patterns and come up with winning strategies. This is the problem of modern games: they offer photorealistic visuals and enemies that look alive but don't behave like it, breaking the immersion. Because of this, developers have to create a new kind of enemy AI: a limited set of behaviors combined with the planning features of F.E.A.R., plus the many small details that give off an aura of intelligence and soul like we saw in Halo, finished with the animation and voice-over that form the last building block of next-generation video game AI. In other words, we want video game AI that thinks, to a degree, but is not unpredictable; that looks and feels like a living being but doesn't have the intelligence of a human; that is capable of challenging us, but not of inventing advanced tactics in every gunfight; that looks like a proper AI but isn't exactly one.
Will we ever get one of those? That question cannot be answered, not at this moment. Sure, we will get it eventually, but when? Well, no one knows. And when we finally get a chance to play a game with this "perfect" enemy AI, that is where the real problem will start. Today, when only a handful of games offer advanced AI NPCs, we resent the games that fail to do so; imagine how much we will hate video games once we try the one that brings that "perfect" AI. Maybe it is better to just keep dreaming about it, and never actually experience the joy of finding it on the other side of some virtual battlefield.