ChatGPT Data Protection

The Italian data protection watchdog, the Garante per la Protezione dei Dati Personali (aka Garante), has imposed a temporary ban on OpenAI's ChatGPT service in the country, citing data protection concerns.

To that end, it has ordered the company to stop processing users' data with immediate effect, stating it intends to investigate whether the company is unlawfully processing such data in violation of the E.U. General Data Protection Regulation (GDPR).

"No information is provided to users and data subjects whose data are collected by Open AI," the Garante noted. "More importantly, there appears to be no legal basis underpinning the massive collection and processing of personal data in order to 'train' the algorithms on which the platform relies."

ChatGPT is estimated to have reached over 100 million monthly active users since its release late last year, but OpenAI has not disclosed what data was used to train its latest large language model (LLM), GPT-4, or how the model was trained.

That said, its predecessor GPT-3 utilizes text sourced from books, Wikipedia, and Common Crawl, the latter of which maintains an "open repository of web crawl data that can be accessed and analyzed by anyone."


The Garante also pointed to the lack of any age verification system to prevent minors from accessing the service, potentially exposing them to "inappropriate" responses. Google's own chatbot, called Bard, is only open to users over the age of 18.

Additionally, the regulator raised questions about the accuracy of the information surfaced by ChatGPT, while also highlighting a data breach the service suffered earlier this month that exposed some users' chat titles and payment-related information.

In response to the order, OpenAI has blocked its generative AI chatbot from being accessed by users with an Italian IP address. It also said it's issuing refunds to subscribers of ChatGPT Plus, in addition to pausing subscription renewals.

The San Francisco-based company further emphasized that it provides ChatGPT in compliance with GDPR and other privacy laws. ChatGPT is already blocked in China, Iran, North Korea, and Russia.

In a statement shared with Reuters, OpenAI said it actively works to "reduce personal data in training our AI systems like ChatGPT because we want our AI to learn about the world, not about private individuals."

OpenAI has 20 days to notify the Garante of the measures it has taken to bring itself into compliance, or risk fines of up to €20 million or 4% of its total worldwide annual turnover, whichever is higher.

The ban, however, is not expected to impact applications from other companies that employ OpenAI's technology to augment their services, including Microsoft's Bing search engine and its Copilot offerings.


The development also comes as Europol warned that LLMs like ChatGPT are likely to help generate malicious code, facilitate fraud, and "offer criminals new opportunities, especially for crimes involving social engineering, given its abilities to respond to messages in context and adopt a specific writing style."

This is not the first time AI-focused companies have come under regulatory scrutiny. Last year, controversial facial recognition firm Clearview AI was fined by multiple European regulators for scraping users' publicly available photos without consent to train its identity-matching service.

It has also run afoul of privacy laws in Australia, Canada, and the U.S., with several countries ordering the company to delete all of the data it obtained in such a manner.

Clearview AI told BBC News last week that it has run nearly a million searches for U.S. law enforcement agencies, despite being permanently banned from selling its faceprint database within the country.
