OpenAI Hit By Austrian Complaint Over ChatGPT ‘False Data’

OpenAI should be held accountable under European Union data protection regulations for false information repeatedly supplied about individuals by the company’s ChatGPT artificial intelligence-powered chatbot, privacy rights group Noyb has said in a formal complaint to the Austrian data regulator.

The organisation said the well-known tendency of AI large language models (LLMs) to generate false information, known as “hallucination”, conflicts with the EU’s General Data Protection Regulation (GDPR), which requires personal data to be accurate.

The regulation also requires organisations to respond to requests to show what data they hold on individuals or to delete information, but OpenAI said it was unable to do either, Noyb said.

“Simply making up data about individuals is not an option,” the group said in a statement.

Sam Altman. Image credit: OpenAI

False data

It said the complainant in its case, a public figure, found ChatGPT repeatedly supplied incorrect information when asked about his birthday, rather than telling users that it didn’t have the necessary data.

OpenAI says ChatGPT simply generates “responses to user requests by predicting the next most likely words that might appear in response to each prompt” and that “factual accuracy” remains an “area of active research”.

The company told Noyb (which stands for None Of Your Business) that it was not possible to correct data, and that it could not provide information about the data processed on an individual, its sources or its recipients, all of which are requirements under the GDPR.

Noyb said OpenAI told it that requests for information on individuals could be filtered or blocked, but this would result in all information about the complainant being blocked.

“It seems that with each ‘innovation’, another group of companies thinks that its products don’t have to comply with the law,” said Noyb data protection lawyer Maartje de Graaf.

Access requirement

Noyb said it is asking the Austrian data protection authority to investigate OpenAI’s data processing and the measures taken to ensure the accuracy of personal data processed in the context of OpenAI’s LLMs, and to order OpenAI to comply with the complainant’s access request and pay a fine to ensure future compliance.

The Italian data protection agency issued a temporary ban on ChatGPT last year over data processing concerns, and in January told the company its business practices may violate the GDPR.

At the time OpenAI said it believes “our practices align with GDPR and other privacy laws, and we take additional steps to protect people’s data and privacy”.

The company said it “actively” works to reduce personal data in training systems such as ChatGPT, “which also rejects requests for private or sensitive information about people”.

Matthew Broersma

Matt Broersma is a long-standing tech freelancer who has worked for Ziff-Davis, ZDNet and other leading publications
