Gather round, fellow gamers, young and old, and I'll tell you a tale of a golden age long past, when the creatures that roam our virtual landscapes stepped blinking into the sunlight for the first time, picked up a thigh bone and smashed stuff with it. 1998 was the year, and Half-Life the game. It brought us Marines who were seemingly capable of outflanking the player and flushing them out with smartly thrown grenades. Even its roaches responded to light, movement and smell.
Half-Life was followed soon after by Thief, whose dark streets were populated by guards with varying states of awareness, able to respond to sounds made by the player and even sounds made by other AI. Then in 1999 came Unreal Tournament, featuring AI that could snipe your face off from half a mile away. Halo in 2001 had smart, surprising AI that was so good it was a reason to buy the nascent Xbox. At that point, a future of human-like AI - opponents that could adapt to the player's movements and actions - didn't seem that far away.
But it never happened. From around the mid-2000s, videogame AI nestled itself beneath a chest-high wall and has pretty much stayed there ever since. Even the best games of recent years have been criticised for their unimpressive AI. But is this sad ending to the story really true? Did AI programmers basically give up at the turn of the millennium? Or has AI simply evolved in more subtle ways, less perceptible to the average gamer?
To find out, I spoke to the AI programmers of two very different games about how AI is programmed and implemented 15 years on from the Marines of Half-Life. Hugo Desmeules and Yanick Mimee are developers at Ubisoft Montreal, responsible respectively for the AI design and programming of Far Cry 3. Mimee argues that my personal view of the AI of the late nineties is rose-tinted. “I remember the games we were making back then in 1999, when we were programming enemies with simple brains and patterns. The technology at that time did not allow us the freedom we have right now.”
Far Cry 3 makes for an interesting case study in AI because it is open world, meaning its AI has much greater freedom of movement and action than in a linear game, and there are many more systems at work that the AI needs to deal with and respond to. Desmeules gives me an example of what a turtle needs to think about in Far Cry 3. “So now a turtle brain no longer consists of only going left and right and being crushed by a plumber; it needs to flee progressing fire, move away from an incoming vehicle, transition from ground to water navigable areas, walk, hide, swim, and care about predators like sharks.”
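The behaviour Desmeules describes maps naturally onto a priority-ordered set of rules: deal with the most urgent threat first, and fall back to ambient wandering when nothing is wrong. A rough sketch of that idea, with entirely made-up names and rules rather than anything from Ubisoft's code, might look like this:

```python
# Minimal sketch of priority-ordered behaviour selection for an ambient animal.
# All names and rules here are hypothetical, not Far Cry 3's actual code.

from dataclasses import dataclass

@dataclass
class Perception:
    fire_nearby: bool = False
    vehicle_incoming: bool = False
    predator_nearby: bool = False
    in_water: bool = False

def choose_behaviour(p: Perception) -> str:
    # Rules are checked in priority order: the first matching threat wins,
    # otherwise the animal falls back to idle wandering.
    if p.fire_nearby:
        return "flee_fire"
    if p.vehicle_incoming:
        return "dodge_vehicle"
    if p.predator_nearby:
        # A turtle might hide in its shell on land, or dive if already swimming.
        return "dive" if p.in_water else "hide_in_shell"
    return "swim_wander" if p.in_water else "walk_wander"

if __name__ == "__main__":
    print(choose_behaviour(Perception(fire_nearby=True)))                      # flee_fire
    print(choose_behaviour(Perception(predator_nearby=True, in_water=True)))   # dive
```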
On the subject of predators, their presence marks a significant advance over the AI of Far Cry 2, which lacked predators because the developers were unable to stop them from eating all the herbivores. “We developed a spawning technique that allows us to take advantage of the fact that, in real life, animals don’t stay put in a single place. You don’t expect an animal to be at the exact same spot if you move 1km away and come back. This, combined with the use of another system we called the encounter system, which will dynamically spawn various AI in the game based on what’s currently there, has done the trick for us.”
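That quote hints at two pieces of machinery: forget animals once the player has moved far enough away, and top up the local population based on what is already nearby. Something like the following toy logic, with invented distances and counts rather than Ubisoft's tuning values, captures the gist:

```python
import math
import random

# Hypothetical illustration of distance-based despawning plus an "encounter"
# style top-up spawner; all thresholds are invented, not Far Cry 3's values.

DESPAWN_RADIUS = 1000.0   # metres: forget animals the player has left behind
SPAWN_RADIUS = 150.0      # metres: bring new animals in just out of sight
TARGET_HERBIVORES = 4
TARGET_PREDATORS = 1

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def spawn(kind, player_pos):
    # Place a new animal on a ring around the player, outside their view.
    angle = random.uniform(0, 2 * math.pi)
    pos = (player_pos[0] + SPAWN_RADIUS * math.cos(angle),
           player_pos[1] + SPAWN_RADIUS * math.sin(angle))
    return {"kind": kind, "pos": pos}

def update_population(player_pos, animals):
    # 1. Despawn anything the player has moved far away from.
    animals = [a for a in animals if distance(player_pos, a["pos"]) < DESPAWN_RADIUS]

    # 2. Count what is currently near the player and top up the difference.
    herbivores = sum(1 for a in animals if a["kind"] == "herbivore")
    predators = sum(1 for a in animals if a["kind"] == "predator")
    for _ in range(TARGET_HERBIVORES - herbivores):
        animals.append(spawn("herbivore", player_pos))
    for _ in range(TARGET_PREDATORS - predators):
        animals.append(spawn("predator", player_pos))
    return animals

if __name__ == "__main__":
    print(update_population((0.0, 0.0), []))
```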
This is more general, Left 4 Dead-style AI direction, though, rather than individual AI. What of the human opponents you face on Rook Island? Far Cry 2's AI was also criticised for being ridiculously aggressive, so for the sequel the developers opted for a “triple state” AI of the kind evident in many other stealth games. Here the AI can be idle (unaware of the player's precise location), suspicious (searching for the player's location) or aware (engaged in open combat). Frankly, this doesn't sound much different from the fifteen-year-old AI in Thief. According to Desmeules, however, Far Cry's human opponents also retain information about recent events. “The fact that they have been in alert or combat recently will leave memories inside their brain and their behaviours will change accordingly. An NPC back in idle after a combat or a search would be much more cautious.”
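Stripped to its bones, that triple-state model with a short memory is the kind of thing you could sketch in a few dozen lines. The version below is purely illustrative, with made-up timers and names, not Far Cry 3's implementation:

```python
from enum import Enum, auto

# Sketch of the "triple state" guard AI described above, plus a simple memory
# of recent combat that makes the NPC more cautious once back in idle.
# Names and numbers are illustrative, not Ubisoft's code.

class State(Enum):
    IDLE = auto()        # unaware of the player's precise location
    SUSPICIOUS = auto()  # searching for the player
    AWARE = auto()       # engaged in open combat

class Guard:
    def __init__(self):
        self.state = State.IDLE
        self.recent_combat = 0.0   # decays over time; a crude "memory"

    def update(self, dt, sees_player, heard_noise):
        self.recent_combat = max(0.0, self.recent_combat - dt)
        if sees_player:
            self.state = State.AWARE
            self.recent_combat = 60.0      # remember the fight for a minute
        elif heard_noise or (self.state == State.AWARE and not sees_player):
            self.state = State.SUSPICIOUS  # lost sight or heard something: search
        elif self.state == State.SUSPICIOUS and not heard_noise:
            self.state = State.IDLE        # nothing found: stand down

    @property
    def detection_speed(self):
        # A guard that fought recently spots the player faster even when idle.
        return 2.0 if self.recent_combat > 0.0 else 1.0

if __name__ == "__main__":
    g = Guard()
    g.update(1.0, sees_player=True, heard_noise=False)    # open combat
    g.update(1.0, sees_player=False, heard_noise=False)   # drops to SUSPICIOUS
    g.update(1.0, sees_player=False, heard_noise=False)   # back to IDLE, but wary
    print(g.state, g.detection_speed)
```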
The differences between the AI of Far Cry 2 and Far Cry 3 clearly demonstrate that AI programming is not static, and efforts are being made to push it forward. That said, I'm not entirely convinced that clever spawning techniques and a bit of memory retention mark great leaps forward in AI development. To locate more complex AI, it may be wise to look away from games about guns.
It's a common mistake to associate artificial intelligence in games entirely with FPS "bots". Even an ambitious FPS like Far Cry 3 only requires its AI to do so many things: hide, seek, shoot. There are other types of game that demand considerably more from their AI. One such example is Crusader Kings II, the medieval grand strategy game from Paradox Interactive.
“Unlike an FPS bot, a strategy game AI needs to handle many, quite different areas of gameplay,” says Henrik Fåhraeus, project lead on CKII, “from the movement of armies and fleets to prioritising the use of limited resources like money, to deciding who its long term friends and enemies should be.”
What separates Crusader Kings from most other strategy games is that it simulates the behaviours of hundreds of individual characters, from emperors and kings right down to courtiers and city mayors. Each of these characters has their own personality, created using a pool of personality “traits” gained and lost through genetics, education and decisions made during gameplay.
“The underlying assumption is that every character desires to protect what it already controls and to expand if possible, perhaps militarily, perhaps through marriage,” explains Fåhraeus. “The personality traits modify these desires. For example, a character with the 'Ambitious' trait is more aggressive both towards external enemies and towards its liege lord, whereas a 'Craven' and 'Content' character is more likely to just sit tight.”
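To make that concrete, here is a toy version of a trait-weighted decision, with invented scores and a single "declare war?" question standing in for the many decisions CKII characters actually weigh; it is a sketch of the idea, not Paradox's code:

```python
# Toy trait-weighted decision in the spirit of the system Fåhraeus describes;
# the numbers and threshold are invented, not CKII's actual values.

BASE_WAR_DESIRE = 0.3

TRAIT_WAR_MODIFIERS = {
    "ambitious": +0.3,   # more aggressive towards enemies and liege alike
    "craven": -0.3,      # prefers to sit tight
    "content": -0.2,     # no urge to expand
}

def wants_war(traits, relative_strength):
    """Would this character declare war, given their traits and how strong
    they are relative to the target (1.0 = evenly matched)?"""
    desire = BASE_WAR_DESIRE + sum(TRAIT_WAR_MODIFIERS.get(t, 0.0) for t in traits)
    desire += 0.2 * (relative_strength - 1.0)
    return desire > 0.5

if __name__ == "__main__":
    print(wants_war(["ambitious"], relative_strength=1.2))          # True: eager expander
    print(wants_war(["craven", "content"], relative_strength=1.2))  # False: sits tight
```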
This means individual characters behave in a logical manner consistent with their personality (unless they have traits like “Lunatic” or “Arbitrary”, in which case their actions will be more random). Yet characters aren't given completely free rein, or the game would fall apart. There is a longer-term component of CKII's AI which affects entire regions, giving states and countries historically logical goals such as conquering specific lands and setting up certain trade routes.
“These priorities remain fairly static and allow the many different aspects of the AI to work together in a consistent manner: declaring war over logical territories, actually trying to occupy these provinces, and allying with enemies of the target,” says Fåhraeus. This allows the character AIs to busy themselves with their ambitions, plots and intrigues without completely upsetting the balance of the game.
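In code terms, the trick is simply that separate subsystems consult the same static priority data. A toy illustration, with placeholder regions and weights rather than anything from Paradox's files:

```python
# Sketch of static regional priorities shared between AI subsystems, so that
# diplomacy and army movement pull in the same direction. Region names and
# weights are placeholders, not CKII data.

REGIONAL_PRIORITIES = {
    "ireland": {"connacht": 0.9, "ulster": 0.7, "brittany": 0.1},
}

def pick_war_target(realm, owned):
    """Diplomacy AI: declare war over the most-wanted province not yet held."""
    wanted = {p: w for p, w in REGIONAL_PRIORITIES[realm].items() if p not in owned}
    return max(wanted, key=wanted.get) if wanted else None

def pick_siege_target(realm, enemy_provinces):
    """Military AI: consult the same table, so the army besieges the province
    the war was actually declared over rather than a random border fort."""
    weights = REGIONAL_PRIORITIES[realm]
    return max(enemy_provinces, key=lambda p: weights.get(p, 0.0))

if __name__ == "__main__":
    target = pick_war_target("ireland", owned={"ulster"})
    print(target)                                                  # connacht
    print(pick_siege_target("ireland", ["brittany", "connacht"]))  # connacht
```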
Contrary to my original assertion, it appears pockets of innovation are occurring within AI development that are specific to the game being created. But what does the future hold? Could improved technology, such as new consoles, help towards more drastic improvements? Mimee believes so. “Better hardware would mean less compromise for us. For example, we would love to see hardware that allows us to store a database of facts to help the AI make more complex decisions. This would require more memory space and more processing power.”
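Mimee doesn't spell out what such a database would look like, but the idea resembles a blackboard of timestamped facts that any AI can write to and later query. A toy sketch of that notion, not an Ubisoft system:

```python
import time

# Toy "fact database" in the spirit of Mimee's suggestion: the AI records
# observations as timestamped facts and queries them when making decisions.
# Purely illustrative.

class FactDB:
    def __init__(self):
        self.facts = []   # list of (timestamp, subject, predicate, value)

    def record(self, subject, predicate, value):
        self.facts.append((time.time(), subject, predicate, value))

    def latest(self, subject, predicate):
        matches = [f for f in self.facts if f[1] == subject and f[2] == predicate]
        return matches[-1][3] if matches else None

if __name__ == "__main__":
    db = FactDB()
    db.record("player", "last_seen_at", (120.0, 45.0))
    db.record("player", "last_seen_vehicle", "jeep")
    # Later, a squad leader can reason over what the whole group has observed.
    print(db.latest("player", "last_seen_at"))
```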
CKII's Fåhraeus, on the other hand, believes that the type of AI currently used in grand strategy games has already reached its zenith. “We are basically just limited by the programmer time and skill. To go further, to open up dramatic new vistas, will require radical breakthroughs in the fields of neural networks and quantum computing.”
Clearly, the extent to which hardware will help depends on the type of game developers are making. But there are tools which already exist that could radically change how AI works in games. One such tool, known as evolutionary computation, works by creating a pool of random algorithms which are then optimised through a virtual natural selection process. This enables the AI to effectively learn and adapt during the process of playing a game.
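In its simplest form, that natural-selection loop looks something like this: keep a pool of candidates, score them, keep the fittest, and breed mutated offspring. The toy example below evolves a pair of behaviour parameters towards an arbitrary target; in a real game the genome would encode bot behaviour and the fitness score would come from playing matches:

```python
import random

# Minimal evolutionary-computation loop of the kind described above.
# The "bot" here is just a pair of numbers scored against a toy target;
# everything about it is illustrative.

POP_SIZE = 30
GENERATIONS = 40
TARGET = (0.8, 0.2)   # pretend ideal aggression / caution mix

def fitness(genome):
    # Higher is better: negative squared distance from the target behaviour.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Offspring are copies of a survivor with small random tweaks.
    return tuple(g + random.gauss(0, rate) for g in genome)

def evolve():
    population = [(random.random(), random.random()) for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        survivors = population[: POP_SIZE // 4]          # selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
    return max(population, key=fitness)

if __name__ == "__main__":
    print("evolved genome:", evolve())
```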
This technique is rarely used in the games industry – the only well-known example of a mainstream implementation is the creature AI featured in Black & White. Late last year, evolutionary computation was used by a team of PhD student programmers from Texas to pass the Turing Test in 2K's much-coveted “BotPrize”. But it could be used anywhere from creating drivers in a racing game who adapt to the behaviour of their opponents over time, to rulers in a grand strategy game like CKII who actually learn as they become more experienced.
To do so, however, would require significant rethinking of how games are developed, and also of how they are played. The tools may already exist, but until a developer comes along who is brave enough to use them, our virtual monkeys will find it hard to advance much further than picking up that thigh bone.
Rick Lane pokes virtual brains and watches them wobble for a variety of websites and magazines. You can watch him become self-aware on Twitter and IGN.