UK launches review of AI models such as ChatGPT

Artificial intelligence regulation to be split between human rights, health and safety, and competition

Anthony Cuthbertson
Thursday 04 May 2023 09:29 BST

A British regulator is to investigate the rise of artificial intelligence (AI) tools such as ChatGPT to see if consumers need protection from the rapidly advancing technology.

The Competition and Markets Authority (CMA) will assess competition and consumer safety issues for companies using new tools that can produce responses to questions, and write letters or essays – and threaten jobs.

“It’s crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers while people remain protected from issues like false or misleading information,” said CMA chief executive Sarah Cardell.

It comes on the same day the White House unveiled a similar plan to protect citizens from the dangers of AI, including $140m (£111m) for research into advances “that are ethical, trustworthy, responsible, and serve the public good”.

While research on AI has been going on for years, the sudden popularity of generative applications such as OpenAI’s ChatGPT and Midjourney has led to a scramble by governments to find ways to rein in uncontrolled growth and unintended consequences.

Regulators around the world are now trying to strike a balance, developing “guardrails” without stifling innovation.

In Britain the government plans to split responsibility for overseeing AI between its regulators for human rights, health and safety, and competition, rather than creating a new body dedicated to the technology.

Former UK chief scientific adviser Sir Patrick Vallance told MPs on the Science, Innovation and Technology Committee on Wednesday that AI could have as big an impact on jobs as the industrial revolution.

The CMA said that, with many of the broader issues raised by the development of AI already being considered by government and other regulators, its study will focus on the implications for competition between firms and for consumer protection.

It has set a deadline for views and evidence to be submitted by 2 June, with plans to report its findings in September.

In the United States, where the main business lobbying group has called for regulation of AI technology, a study into possible rules for the technology began last month.

On Thursday, Vice President Kamala Harris told chief executives of tech companies including Microsoft and Google at a White House summit that they have a “legal responsibility” to ensure the safety of their AI products.

“AI is one of today’s most powerful technologies, with the potential to improve people’s lives and tackle some of society’s biggest challenges,” she said. “At the same time, AI has the potential to dramatically increase threats to safety and security, infringe civil rights and privacy, and erode public trust and faith in democracy.”

President Joe Biden noted last month that AI can help to address disease and climate change but also could harm national security and disrupt the economy in destabilising ways.

Additional reporting from agencies.
