Facebook's artificial intelligence agents creating their own language is more normal than people think, researchers say
The messages sent by chatbots might look a little bizarre, but they aren't unusual or sinister
Fears that computers were taking over swept the world this week when stories emerged about Facebook's AI creating its own language that researchers couldn't understand. But those fears might be a little misplaced.
Artificial intelligence experts have looked to calm worries that robots are becoming sentient or that we are living through the prelude to Terminator.
The messages might seem strange, those experts agree. But they are explicable and fairly normal in the world of artificial intelligence research.
The messages didn't seem to be especially sinister. But the worrying prospect of not being able to understand what an AI was saying, or why it was saying it, concerned many people, and led to fears of such systems becoming sentient or making decisions without humans being able to hold them accountable.
The story came after repeated warnings from many of the most respected minds in the world: people including Stephen Hawking have suggested that artificial intelligence could bring about the end of humanity. Those predictions came to a head days before the story became popular, as Elon Musk and Mark Zuckerberg argued about the dangers of AI – with Mr Zuckerberg saying that the danger had been overstated, after Mr Musk had repeatedly suggested that artificial intelligence could take over the world if it is not properly regulated and restrained.
But artificial intelligence researchers, including those involved in the project, have looked to calm those worries.
The idea of a chatbot inventing its own language might sound terrifying, those behind the Facebook research say. But it is actually a long-running part of the way that AI works and is studied – sometimes being encouraged, and at other times happening by itself.
Similar things have been seen in AI work done by Google for its Translate tool and at OpenAI, for instance.
In the case of the recent Facebook study, the invented language was entirely accidental. The agents were simply never required to keep their exchanges in language comprehensible to their human masters – and so they didn't.
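To make that concrete, here is a minimal sketch of the idea – not Facebook's actual code, and with every function name and number invented for illustration. If the reward only scores the outcome of the negotiation, gibberish is "free"; adding a term that pays for plausible English makes drifting away from it costly.

```python
# Hypothetical sketch, not Facebook's code: all names and values are
# invented to illustrate one idea - if the reward only scores the deal,
# nothing pushes the agents to keep speaking recognisable English.

def task_only_reward(deal_value: float) -> float:
    """Reward driven purely by how good the negotiated deal was."""
    return deal_value

def grounded_reward(deal_value: float, english_score: float,
                    weight: float = 0.5) -> float:
    """The same reward plus a bonus for utterances that a language
    model would score as plausible English (english_score in [0, 1])."""
    return deal_value + weight * english_score

# Under task_only_reward, a private shorthand costs the agents nothing
# as long as they still close good deals:
print(task_only_reward(3.0))        # 3.0, whether the talk was English or not

# Under grounded_reward, incomprehensible shorthand (low english_score)
# earns strictly less than the same deal negotiated in plain English:
print(grounded_reward(3.0, 0.1))    # 3.05 - shorthand
print(grounded_reward(3.0, 0.9))    # 3.45 - plain English
```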
"While the idea of AI agents inventing their own language may sound alarming/unexpected to people outside the field, it is a well-established sub-field of AI, with publications dating back decades," Dhruv Batra, who worked on the project, wrote on Facebook.
In the case of Facebook's AI, the messages might be incomprehensible but their meaning can be worked out, at least a little. It has been compared to the kinds of shorthand that are developed in all communities of specialists – where words might come to mean specific things to people, but be completely mystifying to anyone who is outside of the group.
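One reading offered at the time was that the bots' repetitions might encode quantities. A toy example – entirely invented here, and not the bots' actual protocol – shows how even a very simple convention can be perfectly decodable to insiders while looking like nonsense to everyone else:

```python
# Toy illustration only, not the bots' actual protocol. It shows how a
# convention as simple as "repeat the word once per item" produces
# messages that look like nonsense but carry an exact, recoverable meaning.

def encode(item: str, count: int) -> str:
    """Encode a quantity by repeating the item's name."""
    return " ".join([item] * count)

def decode(message: str) -> tuple[str, int]:
    """Anyone who knows the convention can recover item and quantity."""
    tokens = message.split()
    return tokens[0], len(tokens)

offer = encode("ball", 4)
print(offer)           # "ball ball ball ball" - mystifying to outsiders
print(decode(offer))   # ("ball", 4) - perfectly clear to the other agent
```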
Mr Batra also took issue with the phrasing of "shutting down" the chatbots, and said that such a decision was commonplace. Many AI experts have become irritated because some stories said that researchers had panicked and pulled the plug – when in fact the researchers simply adjusted the experiment, killing that particular job and altering some of the rules that the agents worked by.
"Analyzing the reward function and changing the parameters of an experiment is NOT the same as 'unplugging' or 'shutting down AI'," he wrote. "If that were the case, every AI researcher has been 'shutting down AI' every time they kill a job on a machine."