Italy Fines OpenAI 15m Euros Over Data Collection

Italy’s data protection regulator, the Garante, has fined OpenAI 15 million euros (£12.4m) for what it said were breaches of the GDPR privacy law after concluding an investigation into the collection of personal data by the ChatGPT developer.

OpenAI processed users’ personal data to train ChatGPT “without having an adequate legal basis and violated the principle of transparency and the related information obligations towards users”, the Garante said.

The probe, which was launched at the end of last year, also found OpenAI didn’t provide an “adequate age verification system” to prevent users under 13 from being exposed to inappropriate AI-generated content, the Garante said.

In addition to the fine, the agency ordered OpenAI to carry out a six-month media campaign in Italy to raise public awareness about ChatGPT’s data collection practices.

Image credit: Levart Photographer/Unsplash

‘Disproportionate’

OpenAI called the decision “disproportionate” and said it would appeal.

It said it had worked with the Garante to reinstate ChatGPT after it was briefly banned in the country last year.

“They’ve since recognised our industry-leading approach to protecting privacy in AI, yet this fine is nearly 20 times the revenue we made in Italy during the relevant period,” the company stated.

It said it remains committed to working with privacy authorities to offer “beneficial AI” that respects privacy rights.

The Garante said the fine was calculated to take into account OpenAI’s “cooperative stance”, indicating it could have been larger.

The GDPR, which came into force in 2018, allows data regulators to fine a company up to 20 million euros or 4 percent of its global annual turnover, whichever is higher.

Legal challenges

ChatGPT was banned in Italy for a month last year, but the service was reinstated after Microsoft-backed OpenAI addressed issues including users’ right to refuse consent for the use of their personal data to train algorithms.

OpenAI and other AI companies also face legal challenges over their use of vast amounts of copyrighted materials to train their models.

OpenAI says its training practices are consistent with fair use.

Matthew Broersma

Matt Broersma is a long-standing technology freelancer who has worked for Ziff-Davis, ZDNet and other leading publications.
