Over 100,000 ChatGPT user accounts compromised over last year, report says

Logs containing user information such as IP addresses are being actively traded on the dark web, report says

Vishwam Sankaran
Wednesday 21 June 2023 05:24 BST

More than 100,000 user accounts of the popular artificial intelligence chatbot platform ChatGPT have been compromised over the last year using information-stealing malware, a new report has revealed.

The report, published by Singapore-based cybersecurity firm Group-IB, identified 101,134 compromised accounts, many of whose credentials have been traded on illicit dark web marketplaces over the past year.

The trade peaked in May, when nearly 27,000 sets of compromised ChatGPT credentials were offered on the dark web, the group noted, adding that the Asia-Pacific region had the highest concentration of ChatGPT credentials put up for sale.

This region, according to the report, accounted for almost 40 per cent of compromised accounts between June 2022 and May 2023, followed by Europe.

Since its widespread rollout in November last year, ChatGPT has seen growing use, with employees taking advantage of the chatbot to optimise their work across fields from software development to business communications.

As the chatbot stores the history of user queries and the AI’s responses, experts have warned that unauthorised access to ChatGPT accounts could expose confidential or sensitive information.

“Employees enter classified correspondences or use the bot to optimize proprietary code. Given that ChatGPT’s standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials,” said Dmitry Shestakov, the head of threat intelligence at Group-IB.

Businesses, institutions and universities around the world, including several in Japan, have either banned use of the chatbot or warned staff not to reveal sensitive information to it, as such data could be exploited for targeted attacks against companies and their employees.

The Singapore-based cybersecurity group warned in its latest report that ChatGPT accounts have already gained popularity within underground communities on the dark web that are accessible only via special software.

Malicious software known as info stealers harvests credentials saved in browsers, bank card details, crypto wallet information, cookies, browsing history and other data from infected computers, and sends it to the malware's operators.

Logs containing user information, including IP addresses, are being actively traded on dark web marketplaces, according to Group-IB.

The majority of logs containing ChatGPT accounts were captured by the infamous Raccoon info stealer, the group noted.

Experts urge users to update their passwords regularly and enable two-factor authentication on their ChatGPT accounts.

Users are also advised to disable the chatbot’s chat saving feature from its settings menu or manually delete conversations immediately after use.
