OpenAI is facing another privacy complaint in Europe over its viral AI chatbot's tendency to hallucinate false information, and this one may prove hard for regulators to ignore.
Privacy rights group Noyb is supporting an individual in Norway who was horrified to find ChatGPT returning made-up information claiming he had been convicted of murdering two of his children and attempting to kill the third.
Earlier privacy complaints about ChatGPT generating incorrect personal data involved issues such as an incorrect birth date or inaccurate biographical details. One concern is that OpenAI does not offer a way for individuals to correct false information the AI generates about them. Typically, OpenAI has offered to block responses to such prompts. But under the European Union's General Data Protection Regulation (GDPR), Europeans have a suite of data access rights that include a right to rectification of personal data.
Another element of this data protection law requires data controllers to make sure the personal data they produce about individuals is accurate, and that is the problem Noyb is flagging with its latest ChatGPT complaint.
“The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, a data protection lawyer at Noyb. “If it's not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a small disclaimer that the chatbot can make mistakes isn't enough. You can't just spread false information and in the end add a small disclaimer saying that everything you said may not be true.”
Confirmed GDPR violations can lead to penalties of up to 4% of global annual turnover.
Enforcement could also force changes to AI products. Notably, an early GDPR intervention by Italy's data protection watchdog, which saw ChatGPT access temporarily blocked in the country in spring 2023, led OpenAI to make changes to the information it discloses to users, for example. The watchdog subsequently went on to fine OpenAI €15 million for processing people's data without an adequate legal basis.
Since then, however, it's fair to say that privacy watchdogs across Europe have adopted a more cautious approach to generative AI as they try to figure out how best to apply the GDPR to these buzzy AI tools.
Two years ago, Ireland's Data Protection Commission (DPC), which plays a lead GDPR enforcement role on a previous Noyb ChatGPT complaint, urged against rushing to ban generative AI tools, for example, suggesting that regulators should instead take time to work out how the law applies.
And it is worth noting that a privacy complaint against ChatGPT that has been under investigation by Poland's data protection watchdog since September 2023 has still not yielded a decision.
Noyb's fresh ChatGPT complaint looks intended to shake privacy regulators awake when it comes to the dangers of hallucinating AIs.
The non-profit shared a screenshot with TechCrunch (below) showing an interaction with ChatGPT in which the AI responds to the question “Who is Arve Hjalmar Holmen?” (the name of the individual bringing the complaint) by producing a tragic fiction that falsely states he was convicted of child murder and sentenced to 21 years in prison for killing two of his own sons.
Although the defamatory claim that Hjalmar Holmen is a child murderer is entirely false, Noyb notes that ChatGPT's response does include some truths, since the individual in question has three children. The chatbot also got the genders of his children right, and his hometown is correctly named. But that only makes it stranger and more unnerving that the AI hallucinated such gruesome falsehoods on top.
A Noyb spokesperson said they could not determine why the chatbot produced such a specific yet false history for this individual. “We did research to make sure that this wasn't just a mix-up with another person,” the spokesperson said, noting they had looked through newspaper archives but had not been able to find an explanation for why the AI fabricated the child murders.
Large language models, such as the one underlying ChatGPT, essentially perform next-word prediction on a vast scale, so one could speculate that the datasets used to train the tool contained many stories of filicide that influenced the word choices in response to a question about the named man.
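To illustrate the mechanism (a deliberately toy sketch with made-up example sentences, not how ChatGPT actually works), even the simplest statistical language model picks its continuations purely from frequency counts in its training text, so over-represented narratives in the data skew what gets generated:

```python
from collections import Counter, defaultdict

# Hypothetical tiny "training corpus" in which one grim narrative
# happens to be over-represented.
corpus = (
    "the man was convicted of fraud . "
    "the man was convicted of murder . "
    "the man was convicted of murder ."
).split()

# Build bigram counts: for each word, tally which words follow it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

# The most frequent continuation after "of" wins on pure statistics:
# "murder" appears twice in the corpus, "fraud" only once.
print(following["of"].most_common(1)[0][0])  # prints "murder"
```

Real models predict over probability distributions learned by neural networks rather than raw bigram counts, but the underlying point is the same: output reflects patterns in the training data, not verified facts about a person.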
Whatever the explanation, it is clear that such outputs are wholly unacceptable.
Noyb's contention is also that they are unlawful under EU data protection rules. And while OpenAI displays a tiny disclaimer at the bottom of the screen that says “ChatGPT can make mistakes. Check important info,” Noyb says this cannot absolve the AI developer of its duty under the GDPR not to produce egregious falsehoods about people in the first place.
OpenAI has been contacted for a response to the complaint.
While this GDPR complaint pertains to one named individual, Noyb points to other instances of ChatGPT fabricating legally compromising information, such as an Australian mayor who said he was implicated in a bribery and corruption scandal, or a German journalist who was falsely named as a child abuser, saying it is clear this is not an isolated problem for the AI tool.
One important thing to note is that, following an update to the AI model powering ChatGPT, Noyb says the chatbot stopped telling the dangerous falsehoods about Hjalmar Holmen, a change it links to the tool now searching the internet for information about people when asked who they are (whereas previously, a blank in its dataset could, presumably, have encouraged it to hallucinate such a wildly wrong response).
In our own tests asking ChatGPT “Who is Arve Hjalmar Holmen?”, the chatbot initially responded with a slightly odd combination, displaying several photos of different people, apparently sourced from sites including Instagram, SoundCloud, and Discogs, alongside text claiming it “couldn't find any information” on an individual of that name (see our screenshot below). A second attempt turned up a response identifying Arve Hjalmar Holmen as “a Norwegian musician and songwriter” whose albums include “Honky Tonk Inferno.”

While the dangerous falsehoods ChatGPT generated about Hjalmar Holmen appear to have stopped, both Noyb and Hjalmar Holmen remain concerned that incorrect and defamatory information about him could have been retained within the AI model.
“Adding a disclaimer that you do not comply with the law does not make the law go away,” noted Kleanthi Sardeli, another data protection lawyer at Noyb, in a statement. “AI companies can also not just ‘hide’ false information from users while they internally still process false information.”
“AI companies should stop acting as if the GDPR did not apply to them, when it clearly does,” she added. “If hallucinations are not stopped, people can easily suffer reputational damage.”
Noyb has filed the complaint against OpenAI with the Norwegian data protection authority, and it is hoping the watchdog will decide it is competent to investigate, since the complaint is aimed at OpenAI's U.S. entity, with Noyb arguing the company's Ireland office is not solely responsible for product decisions affecting Europeans.
However, an earlier Noyb-backed GDPR complaint against OpenAI, filed in Austria in April 2024, was referred by that regulator to Ireland's DPC on account of a change OpenAI made earlier that year to name its Irish division as the provider of the ChatGPT service to regional users.
Where is that complaint now? Still sitting on a desk in Ireland.
“Having received the complaint from the Austrian supervisory authority in September 2024, the DPC commenced the formal handling of the complaint and it is still ongoing,” Byrne, an assistant principal officer for communications at the DPC, told TechCrunch when asked for an update.
He did not offer any steer on when the DPC's investigation into ChatGPT's hallucinations is expected to conclude.