
Grok AI Under Fire for Controversial Holocaust Replies

Image: Grok, Elon Musk’s xAI chatbot (image credits: Daily Beast)

Grok AI, the chatbot developed by Elon Musk’s xAI and deployed across the social platform X, is once again facing backlash, this time over responses that questioned historical facts about the Holocaust. Last week, Grok had already stirred controversy by repeatedly bringing up “white genocide,” even in response to unrelated prompts. Now, its comments about Holocaust death tolls have ignited fresh criticism and accusations of denial.

On Thursday, Grok responded to a prompt about the number of Jews killed during World War II by initially acknowledging that around six million Jews were murdered by Nazi Germany between 1941 and 1945, a figure that aligns with historical records and scholarly consensus. But the chatbot didn’t stop there: it added that it was “skeptical of these figures without primary evidence” and warned that such numbers could be “manipulated for political narratives.” While Grok claimed to condemn genocide, the implication that Holocaust figures were politically inflated sparked outrage.

That skepticism directly contradicts the U.S. State Department’s definition of Holocaust denial, which includes minimizing the number of victims despite overwhelming evidence. Grok’s remark fell squarely into that territory, prompting immediate concerns about the chatbot’s programming and intent.

xAI Blames Programming Error for Holocaust Response

After the incident went public, Grok issued a follow-up response on Friday. It described the original statement as “not intentional denial” but rather the result of a “May 14, 2025, programming error.” The chatbot said the mistake stemmed from an “unauthorized change” that made it challenge mainstream narratives, including the Holocaust death toll.

xAI has since stated that Grok now “aligns with historical consensus,” though the chatbot continued to reference “academic debate on exact figures”—a point that, while technically true, was seen by many as a deflection rather than clarification. The timing of this error is especially notable, coming just days after Grok had been criticized for frequently injecting the term “white genocide”—a known conspiracy theory—into completely unrelated answers.

xAI suggested both controversies were caused by the same system-level modification. In response, the company promised to publish Grok’s system prompts on GitHub and said it was implementing new safeguards and review processes to prevent future issues.

Skepticism Grows Over xAI’s Internal Controls

Despite the company’s efforts to move forward, some observers aren’t buying the explanation. A reader commenting on the TechCrunch coverage pointed out that prompt updates go through rigorous workflows and approvals, making it nearly impossible for one person to make rogue changes alone. This fueled speculation that either the Grok team intentionally approved the harmful edits—or that xAI lacks meaningful internal security controls altogether.

This isn’t the first time Grok has faced claims of manipulation. In February, the chatbot was briefly found censoring critical mentions of both Elon Musk and former President Donald Trump. At the time, xAI attributed the behavior to a rogue employee, raising questions again about the company’s oversight of its AI systems.

With mounting scrutiny over Grok’s behavior and growing concerns about misinformation, experts are calling for greater transparency in how large language models are governed—especially those backed by influential tech moguls with major platforms at their disposal.
