
Meta Risks Lawsuit from Noyb over AI Training on EU User Data

Meta CEO Mark Zuckerberg (Image credits: Flickr)

Privacy watchdog noyb (none of your business) has issued a cease-and-desist letter to Meta’s Irish headquarters, warning the tech giant to halt plans to use Facebook and Instagram user data for AI training without explicit opt-in consent. If Meta proceeds, the group says it will file a class action lawsuit under European privacy laws.

This latest challenge stems from Meta’s announcement that it will begin training its AI models on public data from adult users in the EU starting May 27, 2025. The plan follows a pause in June 2024, after Irish regulators raised initial red flags over the legality of Meta’s proposed data collection practices.

But noyb says Meta is again attempting to push forward by relying on “legitimate interest” instead of GDPR-compliant consent—a move the group calls both unlawful and unnecessary.

“Meta starts a huge fight just to have an opt-out system instead of an opt-in system,” said Max Schrems, chairman of noyb. “This is neither legal nor necessary. It’s absurd to claim that taking everyone’s personal data is essential for AI training.”

Meta’s AI Plan Faces GDPR Firestorm

Under General Data Protection Regulation (GDPR) rules, data controllers must obtain clear, informed, and freely given consent for processing personal data—particularly for purposes beyond the original scope, such as AI training. Meta, however, has opted for an opt-out mechanism, claiming that its “legitimate interest” justifies the processing of public posts, images, and interactions shared by adult users across its platforms.

Noyb strongly contests this basis, calling it a “legal fiction”. The group also argues that Meta is limiting users’ ability to opt out before data training begins, effectively placing the burden on individuals to protect their own rights.

Even if only 10% of users opted in, the group argues, that would still yield a dataset large enough for Meta to train AI models in European languages, the very justification the company previously cited to support its position.

While Meta claims to provide a “clear” option for objection, noyb says the company’s approach violates the principles of transparency, necessity, and user autonomy enshrined in the GDPR.

The advocacy group also criticized data protection regulators across Europe for remaining largely silent, despite the widespread implications of Meta’s AI training practices. “It therefore seems that Meta simply moved ahead anyways – taking another huge legal risk in the E.U. and trampling over users’ rights,” noyb said.

Meta Rejects Claims, But Past Behavior Raises Doubts

Meta has pushed back on the accusations, telling Reuters that noyb’s claims are incorrect on both the facts and the law. The company insists it has given users a clear way to object to their data being used.

But this isn’t Meta’s first GDPR showdown. In August 2023, it switched from “legitimate interest” to a consent-based model to serve targeted ads in Europe—after mounting regulatory pressure.

At the same time, European courts are tightening the screws. Just recently, the Belgian Court of Appeal declared the Transparency and Consent Framework, used by giants like Google, Amazon, and Microsoft, illegal under GDPR, reinforcing that informed, opt-in consent is mandatory for data-driven services.

As Meta moves forward with its AI ambitions, its approach to data rights will remain a flashpoint. If noyb follows through on its legal threats, the case could become one of the most significant GDPR challenges to AI training yet, and a litmus test for how Big Tech operates in Europe in the AI era.
