ChatGPT boss says he’s created human-level AI, then says he’s ‘just memeing’

‘AGI has been achieved internally’ at OpenAI, Sam Altman writes on Reddit, before backtracking

Anthony Cuthbertson
Wednesday 27 September 2023 14:45 BST
OpenAI CEO Sam Altman speaks in Abu Dhabi, United Arab Emirates, 6 June, 2023 (The Associated Press)


OpenAI founder Sam Altman, whose company created the viral AI chatbot ChatGPT, announced on Tuesday that his firm had achieved human-level artificial intelligence, before claiming that he was “just memeing”.

In a post to the Reddit forum r/singularity, Mr Altman wrote “AGI has been achieved internally”, referring to artificial general intelligence – AI systems that match or exceed human intelligence.

His comment came just hours after OpenAI unveiled a major update for ChatGPT that will allow it to “see, hear and speak” to users by processing audio and visual information.

Mr Altman then edited his original post to add: “Obviously this is just memeing, y’all have no chill, when AGI is achieved it will not be announced with a Reddit comment.”

The r/singularity Reddit forum is dedicated to speculation surrounding the technological singularity, whereby computer intelligence surpasses human intelligence and AI development becomes uncontrollable and irreversible.

Oxford University philosopher Nick Bostrom wrote about the hypothetical scenario in his seminal book Superintelligence, in which he outlined the existential risks posed by advanced artificial intelligence.

One of Professor Bostrom’s thought experiments involves an out-of-control AGI that destroys humanity despite being designed to pursue seemingly harmless goals.

Known as the Paperclip Maximiser, the experiment describes an AI whose only goal is to make as many paperclips as possible.

“The AI will realise quickly that it would be much better if there were no humans because humans might decide to switch it off,” Professor Bostrom wrote.

“Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.”

Following Mr Altman’s Reddit post, OpenAI researcher Will Depue posted an AI-generated image to X/Twitter with the caption, “Breaking news: OpenAI offices seen overflowing with paperclips!”.

OpenAI is one of several firms pursuing AGI, which if deployed in a way that aligns with human interests has the potential to fundamentally change the world in ways that are difficult to predict.

In a blog post earlier this year, Mr Altman outlined his vision for an AGI that “benefits all of humanity”, while also warning that mitigating risks poses a major challenge.

“If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility,” he wrote.

“On the other hand, AGI would also come with serious risk of misuse, drastic accidents, and societal disruption. Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.”
