OpenAI should be held accountable under European Union data protection regulations for false information repeatedly supplied on individuals by the company’s ChatGPT artificial intelligence-powered chatbot, privacy rights group Noyb has said in a formal complaint to the Austrian data regulator.
The organisation said the well-known tendency of AI large language models (LLMs) to generate false information, known as “hallucination”, conflicts with the EU’s General Data Protection Regulation (GDPR), which requires personal data to be accurate.
The regulation also requires organisations to respond to requests to show what data they hold on individuals or to delete information, but OpenAI said it was unable to do either, Noyb said.
“Simply making up data about individuals is not an option,” the group said in a statement.
It said the complainant in its case, a public figure, found ChatGPT repeatedly supplied incorrect information when asked about his birthday, rather than telling users that it didn’t have the necessary data.
OpenAI says ChatGPT simply generates “responses to user requests by predicting the next most likely words that might appear in response to each prompt” and that “factual accuracy” remains an “area of active research”.
The company told Noyb (which stands for None Of Your Business) that it was not possible to correct data and could not provide information about the data processed on an individual, its sources or recipients, which are all requirements under the GDPR.
Noyb said OpenAI told it that requests for information on individuals could be filtered or blocked, but this would result in all information about the complainant being blocked.
“It seems that with each ‘innovation’, another group of companies thinks that its products don’t have to comply with the law,” said Noyb data protection lawyer Maartje de Graaf.
Noyb said it is asking the Austrian data protection authority to investigate OpenAI’s data processing and the measures taken to ensure the accuracy of personal data processed in the context of OpenAI’s LLMs, and to order OpenAI to comply with the complainant’s access request and pay a fine to ensure future compliance.
The Italian data protection agency issued a temporary ban on ChatGPT last year over data processing concerns and in January told the company its business practices may violate the GDPR.
At the time OpenAI said it believes “our practices align with GDPR and other privacy laws, and we take additional steps to protect people’s data and privacy”.
The company said it “actively” works to reduce personal data in training systems such as ChatGPT, “which also rejects requests for private or sensitive information about people”.