A Norwegian man has filed a data protection complaint against OpenAI after its ChatGPT chatbot falsely claimed that he had murdered two of his children in a “tragic event” and been sentenced to 21 years in prison.
Arve Hjalmar Holmen, who describes himself as a “regular person” with no public profile in Norway, said he had entered the query “Who is Arve Hjalmar Holmen?” among a string of other prompts in August last year.
The chatbot replied, in part: “Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event. He was the father of two young boys, aged seven and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020.”
The chatbot added that the case had “shocked” the nation and that Holmen had been sentenced to prison for the murders, as well as for the attempted murder of his third son.
Holmen said in his complaint to the Norwegian Data Protection Authority that while the claim was “completely false”, it also contained data similar to real information, such as Holmen’s home town, the number of children he has and the age gap between two of his sons.
“Some think that there is no smoke without fire – the fact that someone could read this output and believe it is true is what scares me the most,” Holmen said.
The false output is an example of a “hallucination” – information fabricated by an AI system – a common flaw in generative AI tools.
Hallucinations often occur when a system generates a response based on partial information, but the phenomenon is not fully understood and a range of factors can be involved.
The false information often appears highly detailed and plausible, making it difficult to distinguish from real facts.
Privacy advocacy group Noyb, which filed the complaint with Holmen, said such statements are damaging and that there is no way of finding out what data OpenAI holds on people.
It said in its complaint that Holmen is a “conscientious citizen” who “has never been accused nor convicted of any crime”.
ChatGPT carries a disclaimer that says it “can make mistakes” and urges users to “check important info”, but Noyb said that is not enough.
It wants the regulator to order OpenAI to eliminate inaccurate results relating to Holmen and impose a fine on the company.
OpenAI said the complaint relates to an earlier version of ChatGPT, and that newer versions draw on web searches for information, making false claims such as the one about Holmen “less likely”.
“We continue to research new ways to improve the accuracy of our models and reduce hallucinations. While we’re still reviewing this complaint, it relates to a version of ChatGPT which has since been enhanced with online search capabilities that improves accuracy,” the company stated.
Apple in January suspended its AI summaries of news headlines in notifications in the UK after the tool repeatedly generated false information.