Italy Fines OpenAI 15m Euros Over Data Collection
Italy’s data regulator fines OpenAI 15m euros over lack of ‘adequate legal basis’ for data collection, concerns over exposure of young people
Italy’s data protection regulator, the Garante, has fined OpenAI 15 million euros (£12.4m) for what it said were breaches of the GDPR privacy law after concluding an investigation into the collection of personal data by the ChatGPT developer.
OpenAI processed users’ personal data to train ChatGPT “without having an adequate legal basis and violated the principle of transparency and the related information obligations towards users”, the Garante said.
The probe, which was launched at the end of last year, also found OpenAI didn’t provide an “adequate age verification system” to prevent users under 13 from being exposed to inappropriate AI-generated content, the Garante said.
In addition to the fine, the agency ordered OpenAI to carry out a six-month media campaign in Italy to raise public awareness about ChatGPT’s data collection practices.
‘Disproportionate’
OpenAI called the decision “disproportionate” and said it would appeal.
It said it had worked with the Garante to reinstate ChatGPT after it was briefly banned in the country last year.
“They’ve since recognised our industry-leading approach to protecting privacy in AI, yet this fine is nearly 20 times the revenue we made in Italy during the relevant period,” the company stated.
It said it remains committed to working with privacy authorities to offer “beneficial AI” that respects privacy rights.
The Garante said the fine was calculated to take into account OpenAI’s “cooperative stance”, indicating it could have been larger.
The GDPR, which came into force in 2018, allows data regulators to fine a company up to 20 million euros or 4 percent of its global annual turnover, whichever is higher.
Legal challenges
ChatGPT was banned from Italy for a month last year, but the service was reinstated after Microsoft-backed OpenAI addressed issues including the right of users to refuse consent for the use of personal data to train algorithms.
OpenAI and other AI companies also face legal challenges over their use of vast amounts of copyrighted materials to train their models.
OpenAI maintains that its training practices are consistent with fair use.