ChatGPT creator OpenAI makes new tool for detecting automated text amid fears over its misuse
Artificial intelligence could be used for automated misinformation campaigns, cheating on academic work and pretending to be human, company warns
The creator of ChatGPT, the viral artificial intelligence system that can generate seemingly any text, has released a new tool aimed at spotting such automatically generated writing.
OpenAI said it had built the system in an attempt to mitigate the dangers of AI-written text by allowing people to spot it more easily.
Such threats include automated misinformation campaigns and chatbots posing as humans. The tool should also help protect against “academic dishonesty”, the company suggested, amid increasing fear that such systems could allow students to cheat on homework and other assignments.
But OpenAI said the system is still “not fully reliable”: it correctly identifies only 26 per cent of AI-written text, and it incorrectly labels human-written text as AI-written 9 per cent of the time.
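To put those figures in perspective, here is a minimal sketch in Python applying Bayes’ rule to the reported rates. The prevalence of AI-written text among checked documents is a hypothetical assumption for the sake of the example, not a figure from OpenAI, and the function name is illustrative.

```python
# Illustrative sketch only: the 26% true-positive rate and 9% false-positive
# rate are OpenAI's reported figures; the base rate of AI-written text below
# is an assumption made for this example.

def p_ai_given_flag(tpr: float, fpr: float, base_rate: float) -> float:
    """Bayes' rule: probability that a flagged text really is AI-written."""
    flagged_ai = tpr * base_rate           # AI-written text correctly flagged
    flagged_human = fpr * (1 - base_rate)  # human text incorrectly flagged
    return flagged_ai / (flagged_ai + flagged_human)

# Suppose 1 in 10 texts checked is actually AI-written (assumed, not reported).
print(f"{p_ai_given_flag(tpr=0.26, fpr=0.09, base_rate=0.10):.0%}")  # ~24%
```

Under that assumed base rate, a flag from the classifier would point to genuinely AI-written text only about a quarter of the time, which helps explain OpenAI’s caution against relying on it alone.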
The tool becomes more reliable as the length of the text increases, and performs better on text from more recent AI systems, OpenAI said. It is recommended only for English text, and the company warned that AI-written text can be edited to evade detection.
The company said it was releasing an early version of the system despite those limitations, in an attempt to improve its reliability over time.
But it stressed that people should not use it as a “primary decision-making tool, but instead as a complement to other methods of determining the source of a piece of text”.
The tool might never be able to spot all text that was originally created by an AI system. While OpenAI will be able to update the system based on new workarounds, “it is unclear whether detection has an advantage in the long-term”, it warned.
As it announced the new classifier, OpenAI said it was aware that identifying AI-written text had become a particular concern among educators. The tool is part of an effort to help people deal with artificial intelligence in the classroom, it said, but may also prove useful to journalists and researchers.
It admitted that more work is required, however, saying it was “engaging with educators in the US to learn what they are seeing in their classrooms and to discuss ChatGPT’s capabilities and limitations, and we will continue to broaden our outreach as we learn”. It asked teachers, parents and others concerned about AI in academic settings to reach out with feedback, and to consult the information available on its website.