ChatGPT built with help of underpaid, exploited Kenyan employees, report alleges

Kenyan workers were tasked with labelling content from ‘darkest recesses of the internet’, TIME reports

Vishwam Sankaran
Monday 23 January 2023 10:45 GMT


OpenAI’s chatbot ChatGPT was reportedly built using vital contributions from outsourced, underpaid Kenyan labourers.

The chatbot was built with help from a Kenya-based data labeling team who earned less than $2 per hour, according to an investigation by TIME.

Outsourced Kenyan workers were also subjected to graphic sexual content in order to rid the platform of violence and hate speech.

The labourers were sent snippets of text for labelling from the “darkest recesses of the internet” depicting graphic content like “child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest”, TIME reported.

Workers reportedly read hundreds of these kinds of entries each day for wages that ranged from $1 to $2 an hour, or a $170 monthly salary.

The Kenyan team was managed by Sama, a San Francisco-based firm, which said its workers could take advantage of both individual and group therapy sessions with “professionally-trained and licensed mental health therapists”.

One worker tasked with labelling text told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog. “That was torture,” he said.

Sama reportedly ended all its contracted work for OpenAI in February 2022, much earlier than planned.

In December last year, ChatGPT gained prominence for its “mind-blowing” ability to respond to a range of queries with human-like text output, with researchers across the world praising the AI’s general-purpose language model.

Several users also pointed out the seemingly effective safeguards preventing the chatbot from producing racist or violent content.

Its launch led to widespread speculation that it could revolutionise industries and perhaps even replace tools like Google’s search engine.

But several institutions and scholars had also raised concerns about the widespread use of the AI chatbot disrupting academia.

The New York City education department said it was worried about the negative impacts of ChatGPT on student learning, amid “concerns regarding the safety and accuracy of content.”

“There will be scary moments as we move towards AGI-level systems, and significant disruptions, but the upsides can be so amazing that it’s well worth overcoming the great challenges to get there,” OpenAI chief Sam Altman wrote in a Twitter thread.

“There are going to be significant problems with the use of OpenAI tech over time; we will do our best but will not successfully anticipate every issue,” he said.

Some companies, including Google, had previously warned that releasing such an AI technology for widespread use may pose risks due to inbuilt biases and misinformation.

But Mr Altman held that the AI technology would be necessary for humanity “to fully understand the universe.”

OpenAI and Sama did not immediately respond to The Independent’s requests for comment.
