OpenAI has built a system that can identify text made by ChatGPT – but it is worried about releasing it
Tool could ‘stigmatise the use of AI’, creators warn
ChatGPT creator OpenAI has developed a system that can identify text made by the AI system – but is worried about using it.
Making it easy to spot AI-generated text could create a stigma around using the technology, the company warned. Among other things, that might make ChatGPT and similar tools less popular.
Since ChatGPT was first released at the end of 2022, it has become one of the most popular websites in the world. Its use has grown in settings such as education, with teachers increasingly reporting that students are generating essay answers and other work with the AI system.
In response to those worries and others, AI companies including OpenAI have been working on tools that can identify text generated by an artificial intelligence system such as ChatGPT.
Now the company has developed a system that can spot AI-generated text in almost all cases, according to reports. But it is worried that being able to identify the text with such certainty could cause problems of its own.
The tool, which has reportedly been in the works for more than a year, leaves the text equivalent of a watermark in the words that are generated. That pattern would not be recognisable to any person who generated or read the text, but could easily be spotted by a companion AI system that teachers could use, for instance, to see whether their students are cheating.
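OpenAI has not disclosed how its watermark works. Statistical text watermarks of this kind are often explained, though, by reference to the “green list” scheme described by researchers including Kirchenbauer et al. in 2023: at each step, the generator quietly favours a pseudo-random, key-dependent subset of words, and a detector holding the same key checks whether those words appear far more often than chance would allow. The sketch below is purely illustrative – the key, vocabulary and scoring are invented for demonstration, and it should not be read as OpenAI's actual method.

```python
import hashlib
import math

# Illustrative sketch of a "green list" statistical text watermark,
# loosely following Kirchenbauer et al. (2023). OpenAI has not disclosed
# its actual method; the key and thresholds here are invented.

SECRET_KEY = "demo-key"  # shared between the generator and the detector
GREEN_FRACTION = 0.5     # fraction of the vocabulary marked "green" per step


def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign `token` to the green list, seeded by the
    previous token and the secret key: the split looks random to a reader
    but is exactly reproducible by anyone holding the key."""
    digest = hashlib.sha256(f"{SECRET_KEY}:{prev_token}:{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION


def detection_z_score(tokens: list[str]) -> float:
    """Return how far the observed count of green tokens sits above what
    unwatermarked text would produce by chance, in standard deviations."""
    n = len(tokens) - 1
    greens = sum(is_green(tokens[i], tokens[i + 1]) for i in range(n))
    expected = GREEN_FRACTION * n
    stddev = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / stddev


if __name__ == "__main__":
    # Ordinary text should score near zero; text whose generator was
    # biased towards green tokens scores implausibly high.
    sample = "the cat sat on the mat and the dog ran home".split()
    print(round(detection_z_score(sample), 2))
```

A real deployment would bias sampling over a model's full token vocabulary rather than whole words, but the principle is the same: the watermark lives in the pattern of word choices rather than in any visible marker, which is also why rewording or translating the text can destroy it.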
But the company is worried that deploying the tool would lead fewer people to use ChatGPT, a new report says. An internal report is said to have found that nearly 30 per cent of ChatGPT users would use the chatbot less if such a watermarking system were in place.
The creation of the tool and the fear that it could limit the use of ChatGPT were first reported by the Wall Street Journal. After the paper published its report, OpenAI updated a blog post from May in which it had discussed the possibility of watermarking text.
In that blog post, it said that its teams “have developed a text watermarking method that we continue to consider as we research alternatives”. The method is “highly accurate and even effective” against “localised” tampering, such as paraphrasing the text from ChatGPT, the company said, but it could be fooled by other, more “globalised” techniques, such as translating the text into a different language or using another AI model to reword it.
But it also noted that the “text watermarking method has the potential to disproportionately impact some groups”. It could “stigmatise use of AI as a useful writing tool for non-native English speakers”, the company warned.