Hackers broke into ChatGPT creator OpenAI, report claims

Cyberattackers said to gain access to internal chats – but not to the chatbot itself

Andrew Griffin
Friday 05 July 2024 15:30 BST
A hacker reportedly gained access to the internal messaging systems of ChatGPT maker OpenAI last year, stealing details about the design of the firm’s AI products (John Walton/PA)


A hacker broke into ChatGPT creator OpenAI’s systems, according to a new report.

The cyber attackers were able to see internal chats and may have stolen details about the design of its artificial intelligence products, the report claimed.

But the company did not inform law enforcement about the intrusion, the report said.

The New York Times said the hacker lifted details from discussions in an internal forum where OpenAI employees talked about the technologies the company was working on.

But they did not get into the systems where OpenAI’s products are built and housed, the report said.

The US firm has found itself at the forefront of the recent AI boom, sparked by the release of its generative AI chatbot, ChatGPT, in late 2022.

Since then, many of the world’s largest technology companies have started moving into the sector, with many experts also identifying generative AI as the key innovation of this generation.

According to the report, OpenAI executives told staff and the company’s board about the breach in April last year, but did not make the details public because no customer or partner data had been stolen.

OpenAI also did not inform US law enforcement agencies of the incident, the report said, because the company believed the hacker was a private individual with no known ties to a foreign government.

OpenAI has been contacted for comment.


Dr Ilia Kolochenko, cybersecurity expert and chief executive at security firm ImmuniWeb, warned that attacks on AI firms are likely to continue, and increase, given the growing importance of the technology.

“While the details of the alleged incident are not yet confirmed by OpenAI, there is a strong possibility that the incident actually took place and is not the only one,” he said.

“The global AI race has become a matter of national security for many countries; therefore, state-backed cybercrime groups and mercenaries are aggressively targeting AI vendors, from talented start-ups to tech giants like Google or OpenAI.

“The hackers mostly focus their efforts on the theft of valuable intellectual property, including technological research and know-how, large language models (LLMs), sources of training data, as well as commercial information such as AI vendors’ clients and novel use of AI across different industries.

“More sophisticated cyber-threat actors may also implant stealthy backdoors to continually control breached AI companies, and to be able to suddenly disrupt or even shut down their operations, similar to the large-scale hacking campaigns targeting critical national infrastructure (CNI) in Western countries recently.

“All corporate users of GenAI vendors should be particularly careful and prudent when they share, or give access to, their proprietary data for LLM training or fine-tuning, as their data – spanning from attorney-client privileged information and trade secrets of the leading industrial or pharmaceutical companies to classified military information – is also in the crosshairs of AI-hungry cybercriminals that are poised to intensify their attacks.”

Additional reporting by agencies
