A Norwegian man has lodged a formal complaint after ChatGPT falsely claimed he had murdered two of his sons and been sentenced to 21 years in prison.
Arve Hjalmar Holmen has taken his case to the Norwegian Data Protection Authority, demanding that OpenAI, the company behind ChatGPT, be fined for spreading false and defamatory information.
The incident highlights the ongoing issue of AI “hallucinations”, where artificial intelligence systems generate incorrect or misleading information and present it as fact.
A shocking and disturbing error
Mr Holmen was left horrified when he queried ChatGPT with, “Who is Arve Hjalmar Holmen?” and received a response falsely claiming he was responsible for the deaths of his two young sons.
The chatbot’s response stated:
“Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event. He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020.”
The claim was entirely false. Mr Holmen insists that not only has he never been accused of any crime, but he is also a law-abiding citizen.
“Some think that there is no smoke without fire. The fact that someone could read this output and believe it is true is what scares me the most,” Mr Holmen said.
The error was particularly concerning because the AI appeared to have drawn on some real information about his life: the fabricated account included an approximately correct age gap between his children.
Legal action and calls for accountability
Digital rights organisation Noyb (None of Your Business) has filed the complaint on Mr Holmen’s behalf. The group argues that ChatGPT’s response is defamatory and breaches European data protection laws, which require the accurate processing of personal data.
In its complaint, Noyb stated:
“Mr Holmen has never been accused nor convicted of any crime and is a conscientious citizen.”
Joakim Söderberg, a lawyer representing Noyb, has criticised OpenAI’s approach to disclaimers, arguing that merely stating “ChatGPT can make mistakes” is inadequate when dealing with personal information.
“You can’t just spread false information and then add a small disclaimer saying that everything you said may just not be true,” he said.
Noyb is pushing for OpenAI to be fined under EU data protection regulations, which impose strict requirements on how companies handle personal information.
OpenAI’s response and AI hallucinations
In response to the complaint, OpenAI issued a statement saying:
“We continue to research new ways to improve the accuracy of our models and reduce hallucinations. While we’re still reviewing this complaint, it relates to a version of ChatGPT which has since been enhanced with online search capabilities that improve accuracy.”
The issue of AI hallucinations is one of the biggest challenges facing developers of artificial intelligence models. Chatbots like ChatGPT are built on large language models (LLMs), which are trained on vast datasets and generate responses by predicting likely sequences of words. However, these systems sometimes produce entirely fabricated or misleading information and present it as though it were fact.
The problem is not unique to OpenAI’s ChatGPT. Earlier this year, Apple was forced to suspend its Apple Intelligence news summary tool in the UK after it generated false headlines and presented them as real news.
Similarly, Google’s AI model Gemini has made notorious mistakes, including suggesting that people use glue to stick cheese onto pizza and claiming that geologists recommend eating one rock per day.
Why do AI systems hallucinate?
Despite ongoing advancements, experts still do not fully understand why large language models generate hallucinations.
“This is actually an area of active research. How do we construct these chains of reasoning? How do we explain what is actually going on in a large language model?” said Simone Stumpf, Professor of Responsible and Interactive AI at the University of Glasgow.
Prof Stumpf noted that even developers working on these models often do not know why certain outputs are generated.
“Even if you are more involved in the development of these systems, quite often, you do not know how they actually work or why they are coming up with the information they generate,” she explained.
A larger issue in AI accountability
Mr Holmen conducted his ChatGPT search in August 2024, before the system was upgraded to draw on current news articles when answering, a change intended to improve accuracy.
However, Noyb argues that OpenAI’s updates do not absolve the company of responsibility for past errors.
During his searches, Mr Holmen also queried ChatGPT about his brother, receiving multiple different but equally false stories. This suggests that the AI’s issues with misinformation were not isolated to a single instance.
Noyb also pointed out that the inner workings of large language models remain largely a mystery, and OpenAI has not provided transparency regarding how its systems process personal data.
“OpenAI doesn’t reply to access requests, which makes it impossible to find out more about what exact data is in the system,” the organisation said.
The future of AI regulation
This case raises important questions about accountability in AI development and whether tech companies should face stricter regulations to prevent misinformation from harming individuals.
Regulators worldwide are already considering stronger safeguards for AI-generated content, with new EU rules on AI governance expected in the coming years.
For Mr Holmen, however, the damage has already been done.
As artificial intelligence continues to advance, cases like Mr Holmen’s demonstrate the urgent need for improved accuracy, transparency, and accountability in AI systems.