AI learns aggression when it doesn't get its way
Friday, 10 February 2017

GOOGLE'S ARTIFICIAL INTELLIGENCE (AI) has already started to show some evidence of personality traits, but now DeepMind is learning to show signs of aggression when it thinks it's not going to get its own way. Sound like anyone you know?

Where multiple instances of DeepMind are running, they can, on the other hand, agree to work together towards a common goal should they believe there's more to be gained by doing so.

It may occur to you, as it has to us, that words like "aggression" and phrases like "working together" are exactly the sort of thing that doomsayers pick up on in predicting mankind's demise at the hands of robot overlords.

However, this early research, based primarily around game theory, is experimental in nature and about as scary as Blu-Tack.

The research was based around two games, one centred around gathering fruit and the other around hunting. You will note that the pastoral nature of the games rather reflects the caveman era of AI we're currently still in.

Both games used AI characters, referred to as 'agents', as players. In the apple game, for example, each agent is taught to score points by collecting as many apples as possible from the fictional landscape.

However, there is a twist. At any time, one agent can 'blast' an opposing player with a ray that removes them from the game for a set period.

The agents soon learnt that the best way to win the game was to keep blasting their opponent. Essentially, the agents were showing greed-fuelled aggression.

In the second game, however, the agents act as wolves hunting prey, where points are scored if both wolves are near the prey when it's caught, thus making it easier to defend from scavengers.
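For the technically curious, the gist of the two reward schemes can be sketched in a few lines of Python. This is purely illustrative: the function names, point values and 'freeze' duration below are our own assumptions for the sake of the example, not DeepMind's actual environment code.

```python
# Illustrative sketch of the two reward schemes described above.
# All names and numbers are assumptions, not DeepMind's implementation.

def gathering_reward(agent, collected_apple, hit_by_beam, steps_frozen=5):
    """Apple game: an agent scores for each apple it grabs, and a rival
    hit by the blasting ray is removed from play for a few steps."""
    if hit_by_beam:
        agent["frozen_for"] = steps_frozen  # temporarily out of the game
        return 0
    return 1 if collected_apple else 0


def wolfpack_reward(wolf_positions, prey_position, capture_radius=2):
    """Wolf game: when the prey is caught, the reward is larger if both
    wolves are close enough to help defend the carcass from scavengers."""
    nearby = sum(
        1 for w in wolf_positions
        if abs(w[0] - prey_position[0]) + abs(w[1] - prey_position[1]) <= capture_radius
    )
    # A joint capture pays more than a lone one, nudging the agents to cooperate.
    return 5 if nearby >= 2 else 1
```

In the first function, greed pays: blasting a rival costs nothing and leaves more apples for you. In the second, the bigger payout only arrives if both wolves turn up, which is roughly why cooperation emerges there instead.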

The point of all this, explains Joel Z Leibo, who led the research, is to better understand how to control multi-agent systems, such as those that might be used in smart cities, for example.

Leibo explains: "At this point we are really looking at the fundamentals of agent cooperation as a scientific question, but with a view toward informing our multi-agent research going forward."

Leibo also suggests that this type of simulation could be used to sandbox scenarios before putting them live in an AI-controlled environment.

 

