
OpenAI to offer remedies to resolve Italy's ChatGPT ban

Italian regulators say the company behind ChatGPT will propose measures to resolve data privacy concerns that sparked the country's temporary ban on the artificial intelligence chatbot

Kelvin Chan
Thursday 06 April 2023 14:16 BST
(Photo: Copyright 2023 The Associated Press. All rights reserved)


The company behind ChatGPT will propose measures to resolve data privacy concerns that sparked a temporary Italian ban on the artificial intelligence chatbot, regulators said Thursday.

The Italian data protection authority, known as Garante, last week blocked San Francisco-based OpenAI's popular chatbot, ordering it to temporarily stop processing Italian users' personal information while it investigates a possible breach of European Union data privacy rules.

Experts said it was the first such case of a democracy imposing a nationwide ban on a mainstream AI platform.

In a video call late Wednesday between the watchdog's commissioners and OpenAI executives, including CEO Sam Altman, the company promised to set out measures to address the concerns. It has not yet detailed those remedies.

The Italian watchdog said it didn't want to hamper AI's development but stressed to OpenAI the importance of complying with the 27-nation EU's stringent privacy rules.

The regulators imposed the ban after some users' messages and payment information were exposed to others. They also questioned whether there's a legal basis for OpenAI to collect massive amounts of data used to train ChatGPT's algorithms and raised concerns the system could sometimes generate false information about individuals.

So-called generative AI technology like ChatGPT is "trained" on huge pools of data, including digital books and online writings, and is able to generate text that mimics human writing styles.

These systems have created buzz in the tech world and beyond, but they also have stirred fears among officials, regulators and even computer scientists and tech industry leaders about possible ethical and societal risks.

Other regulators in Europe and elsewhere have started paying more attention after Italy's action.

Ireland's Data Protection Commission said it's "following up with the Italian regulator to understand the basis for their action and we will coordinate with all EU Data Protection Authorities in relation to this matter."

France's data privacy regulator, CNIL, said it's investigating after receiving two complaints about ChatGPT. Canada's privacy commissioner also has opened an investigation into OpenAI after receiving a complaint about the suspected "collection, use and disclosure of personal information without consent."

In a blog post this week, the U.K. Information Commissioner's Office warned that "organizations developing or using generative AI should be considering their data protection obligations from the outset" and design systems with data protection as a default.

"This isnā€™t optional ā€” if youā€™re processing personal data, itā€™s the law," the office said.

In an apparent response to the concerns, OpenAI published a blog post Wednesday outlining its approach to AI safety. The company said it works to remove personal information from training data where feasible, fine-tunes its models to reject requests for the personal information of private individuals, and acts on requests to delete personal information from its systems.
