AI singularity is a lot closer than we thought, ChatGPT rivals warn

Anthropic predicts ‘rapid AI progress may not end before AI systems have a broad range of capabilities that exceed our own capacities’

Anthony Cuthbertson
Monday 13 March 2023 15:46 GMT
An artist stands in front of an artwork at an exhibition in San Francisco on 9 March 2023, aimed at helping visitors think about the potential dangers of artificial intelligence (Getty Images)


The arrival of human-level artificial intelligence may be a lot closer than previously thought, according to leading AI researchers.

The point at which artificial general intelligence (AGI) exceeds human intelligence, often referred to as the AI singularity, has been debated by AI researchers and futurologists for many years, though most forecasts place the hypothetical date decades away.

In a far-reaching blog post about artificial intelligence safety, AI research firm Anthropic detailed how the “very rapid progress” of artificial intelligence would likely continue rather than stall or plateau, meaning AI could overtake humans within years.

“People tend to be bad at recognising and acknowledging exponential growth in its early phases,” the 6,500-word blog post stated.

“Although we are seeing rapid progress in AI, there is a tendency to assume that this localised progress must be the exception rather than the rule, and that things will likely return to normal soon.

“If we are correct, however, the current feeling of rapid AI progress may not end before AI systems have a broad range of capabilities that exceed our own capacities. Furthermore, feedback loops from the use of advanced AI in AI research could make this transition especially swift.”

The outcome of such advances, according to Anthropic, would be that “most or all knowledge work may be automatable in the not-too-distant future”. If correct, this would also have major implications for the rate of progress of other technologies, and therefore society more generally.

The blog post builds on previous comments by Anthropic co-founder Jack Clark, who said last month that he believed AI has started to display “compounding exponential” properties.

Similar comments have been made by other prominent AI researchers, with DeepMind’s Nando de Freitas claiming last year that “the game is over” in the decades-long quest to realise AGI.

The creator of ChatGPT has also said that new artificial intelligence tools will soon “make ChatGPT look like a boring toy”, leading to problems that it may not be possible to anticipate.

Sam Altman, chief executive and co-founder of OpenAI, claimed that ChatGPT is “incredibly limited” and creates a “misleading impression of greatness”, but said that future versions of the technology will be radically improved.

“There will be scary moments as we move towards AGI-level systems, and significant disruptions, but the upsides can be so amazing that it’s well worth overcoming the great challenges to get there,” he said in December.

GPT-4, the successor to the AI model that powers ChatGPT, is expected to be released in the coming weeks.
