Google creates AI that can make its own plans and envisage consequences of its actions
The agents are designed to think like humans
Google’s artificial intelligence division is developing AI that can make its own plans.
DeepMind says its “Imagination-Augmented Agents” can “imagine” the possible consequences of their actions, and interpret those simulations.
They can then choose the course of action best suited to what they want to achieve.
In a pair of tasks, the researchers say they outperformed baseline agents “considerably”.
They essentially think like humans, trying out different strategies in their heads, so to speak, and are therefore able to learn despite having little “real” experience.
“The agents we introduce benefit from an ‘imagination encoder’ – a neural network which learns to extract any information useful for the agent’s future decisions, but ignore that which is not relevant,” the researchers wrote in a blog post.
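To make the idea concrete, the sketch below shows one way an imagination-augmented agent could be wired up: a learned environment model rolls out imagined futures for each candidate action, an encoder compresses each rollout into a code, and a policy combines those codes with a model-free path. This is a minimal, illustrative example in PyTorch and not DeepMind's implementation; the class names, network sizes and rollout scheme are hypothetical stand-ins.

```python
# Illustrative sketch only: hypothetical imagination-augmented agent in PyTorch.
# Not DeepMind's code; architecture and names are invented for clarity.

import torch
import torch.nn as nn


class EnvironmentModel(nn.Module):
    """Learned (here: untrained) model predicting the next state and reward."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Linear(state_dim + action_dim, state_dim + 1)

    def forward(self, state, action_onehot):
        out = self.net(torch.cat([state, action_onehot], dim=-1))
        return out[..., :-1], out[..., -1:]          # next_state, reward


class ImaginationEncoder(nn.Module):
    """Summarises one imagined rollout into a fixed-size code with an LSTM."""
    def __init__(self, state_dim, code_dim):
        super().__init__()
        self.lstm = nn.LSTM(state_dim + 1, code_dim, batch_first=True)

    def forward(self, rollout):                      # (batch, steps, state_dim + 1)
        _, (h, _) = self.lstm(rollout)
        return h[-1]                                 # final hidden state as the code


class ImaginationAugmentedPolicy(nn.Module):
    """Combines encoded imagined rollouts with a model-free path into action logits."""
    def __init__(self, state_dim, action_dim, code_dim=32, rollout_len=3):
        super().__init__()
        self.action_dim = action_dim
        self.rollout_len = rollout_len
        self.model = EnvironmentModel(state_dim, action_dim)
        self.encoder = ImaginationEncoder(state_dim, code_dim)
        self.model_free = nn.Linear(state_dim, code_dim)
        self.head = nn.Linear(code_dim * (action_dim + 1), action_dim)

    def forward(self, state):                        # state: (batch, state_dim)
        codes = [self.model_free(state)]             # model-free path
        for a in range(self.action_dim):             # one imagined rollout per action
            action = nn.functional.one_hot(
                torch.full((state.size(0),), a, dtype=torch.long),
                self.action_dim).float()
            s, frames = state, []
            for _ in range(self.rollout_len):        # imagine a short future
                s, r = self.model(s, action)
                frames.append(torch.cat([s, r], dim=-1))
            codes.append(self.encoder(torch.stack(frames, dim=1)))
        return self.head(torch.cat(codes, dim=-1))   # action logits


if __name__ == "__main__":
    policy = ImaginationAugmentedPolicy(state_dim=8, action_dim=4)
    logits = policy(torch.randn(2, 8))               # batch of two toy states
    print(logits.shape)                              # torch.Size([2, 4])
```

In this toy version the encoder is free to learn which parts of each imagined future matter for the decision and which can be ignored, which is the role the researchers describe for their imagination encoder.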
They tested the imagination-augmented agents on the puzzle game Sokoban and a spaceship navigation game, both of which “require forward planning and reasoning”.
“For both tasks, the imagination-augmented agents outperform the imagination-less baselines considerably: they learn with less experience and are able to deal with the imperfections in modelling the environment,” said the researchers.
“Because agents are able to extract more knowledge from internal simulations, they can solve tasks with fewer imagination steps than conventional search methods, such as Monte Carlo tree search.”
DeepMind’s AlphaGo program, which beat the world’s best human Go players, was different, in that it operated in a “perfect” environment with “clearly defined rules which allow outcomes to be predicted very accurately in almost every circumstance”.
The company now wants to create computers that can thrive in “imperfect” environments that are complex, and where unpredictable problems can arise.
“Further analysis and consideration is required to provide scalable solutions to rich model-based agents that can use their imaginations to reason about – and plan for – the future,” they added.