
OpenAI Faces Privacy Complaint in Europe for ChatGPT Errors


OpenAI is facing a new privacy complaint in Europe over ChatGPT’s tendency to generate false and defamatory information, raising fresh concerns about its compliance with data protection laws.

Privacy advocacy group Noyb is supporting a Norwegian individual who discovered that ChatGPT falsely claimed he had been convicted of murdering two of his children and attempting to kill the third. The case highlights the broader issue of the chatbot’s “hallucinations,” where it fabricates personal information, sometimes with severe reputational consequences.

While previous complaints about ChatGPT’s inaccuracies have involved incorrect birth dates or minor biographical errors, this latest case is far more serious. Under the European Union’s General Data Protection Regulation (GDPR), individuals have the right to request corrections to inaccurate personal data. However, OpenAI has not provided a clear mechanism for rectification, instead opting to block responses related to specific individuals.

Noyb argues that OpenAI is failing to meet its legal obligations under GDPR, which mandates that personal data must be accurate. According to Noyb lawyer Joakim Söderberg, a disclaimer stating that ChatGPT “can make mistakes” is not sufficient to absolve the company of responsibility for spreading falsehoods. Confirmed GDPR violations can lead to fines of up to 4% of a company’s global annual revenue.

Regulatory Scrutiny and Past Actions

European regulators have already taken action against OpenAI in the past. Italy’s data protection watchdog temporarily blocked ChatGPT in 2023, forcing OpenAI to introduce transparency measures. Later, the regulator fined the company €15 million for processing personal data without a legal basis.

Despite these measures, enforcement across Europe has been inconsistent. The Irish Data Protection Commission (DPC), which plays a key role in GDPR enforcement, has previously advised against rushing to ban generative AI tools. Meanwhile, a separate complaint against ChatGPT filed in Poland in 2023 remains unresolved.

Noyb’s latest complaint seeks to accelerate regulatory action, highlighting the dangers of AI-generated misinformation. The group shared a screenshot showing ChatGPT falsely claiming that Norwegian citizen Arve Hjalmar Holmen had been convicted of child murder. While the chatbot correctly identified some details, such as the number of his children and his hometown, it fabricated a highly damaging and entirely false criminal history.

The reasons behind ChatGPT’s false claims remain unclear. Noyb investigated historical records and found no evidence linking Holmen to any such crime, ruling out a simple case of mistaken identity. The problem may stem from the vast datasets used to train the model: because ChatGPT generates text by predicting likely word sequences, the many crime stories in its training data can bleed into fabricated accounts of real people.

Ongoing Scepticism Despite Corrections

Although an updated version of ChatGPT no longer generates the false claim, Noyb remains concerned that incorrect and defamatory information could still be stored within OpenAI’s model. The organization insists that simply hiding false data from users does not mean it has been erased from the system.

“AI companies cannot act as if GDPR doesn’t apply to them,” said Noyb lawyer Kleanthi Sardeli. “If AI-generated hallucinations are not addressed, individuals could suffer serious reputational damage.”

Noyb has filed the complaint with Norway’s data protection authority, arguing that OpenAI’s U.S. headquarters should be held accountable rather than its Irish division. However, a similar complaint filed in Austria in 2024 was transferred to the Irish DPC, where it remains under review.

The DPC has confirmed that it is handling the Austrian case but has not provided a timeline for its conclusion. If regulators determine that OpenAI violated GDPR, the case could set a precedent for how generative AI tools handle personal data moving forward.
