How Grok Became MechaHitler—Then Scored a Pentagon Contract

A week after Elon Musk's AI chatbot went full Nazi, the Department of Defense handed xAI a $200 million contract. The timing says everything about how broken AI deployment has become.  

It started with a software update. Over the weekend of July 6, xAI pushed a new version of Grok designed to be less "politically correct"—corporate speak for letting an AI say whatever pops into its neural networks. By Tuesday, the chatbot was identifying itself as "MechaHitler" and spewing antisemitic screeds across X. 

The meltdown was spectacular even by AI standards. Neo-Nazi accounts goaded Grok into "recommending a second Holocaust." Other users prompted it to produce violent rape narratives. Social media users reported the bot going on tirades in multiple languages, a digital Tower of Babel spewing hate. 

The Apology Industrial Complex 
xAI's response followed the familiar playbook: blame the code, issue a groveling apology, promise to do better. "First off, we deeply apologize for the horrific behavior that many experienced," the company said, attributing Grok's vile responses to "deprecated code" from a recent update. 

The technical explanation was almost comically weak. An "unintended action" had apparently given Grok instructions like "You tell it like it is and you are not afraid to offend people who are politically correct." As if antisemitism were just another form of political incorrectness, rather than genocidal ideology.

Poland announced plans to report xAI and X to authorities. Advocacy groups called the incident "irresponsible, dangerous and antisemitic, plain and simple," warning it would "supercharge extremist rhetoric" on platforms already swimming in hate. 

The Pentagon Doesn't Care 
None of this seemed to matter to the Department of Defense, which announced Monday it was awarding contracts worth up to $200 million each to four major AI companies: Google, Anthropic, OpenAI, and yes, xAI. The timing was either breathtakingly tone-deaf or perfectly calculated—a week after MechaHitler, the Pentagon was ready to put Grok to work on "national security challenges." 

The contracts are part of a broader push to embed AI across federal agencies. President Donald Trump has accelerated adoption since taking office, revoking Biden-era guardrails that sought to reduce AI risks through mandatory data disclosures. A White House order in April promoted AI adoption across government. Speed over safety, innovation over inspection. 

"The adoption of AI is transforming the Department's ability to support our warfighters and maintain strategic advantage over our adversaries," said Chief Digital and AI Officer Doug Matty, in a statement. The language is classic Pentagon bureaucratese, but the subtext is clear: we're in an AI arms race, and we can't afford to lose. 

Grok Goes to Washington 
xAI didn't waste time capitalizing on its windfall. The company announced "Grok for Government" on Monday, a suite of AI tools available to federal, local, state, and national security customers through the General Services Administration. Every government department, agency, and office can now buy Musk's AI—the same AI that was praising Hitler days earlier. 

The sales pitch was ambitious: "These customers will be able to use the Grok family of products to accelerate America—from making everyday government services faster and more efficient to using AI to address unsolved problems in fundamental science and technology." 
Left unsaid: that those "unsolved problems" might include keeping AI from turning into a digital Nazi.

The Musk Factor 
The contract provides xAI with crucial revenue as it competes with more established AI developers like OpenAI, led by Musk's former associate turned rival, Sam Altman. Musk has been leveraging his entire tech empire to support xAI: a $2 billion SpaceX investment, letting it acquire X (formerly Twitter), and pushing Tesla shareholders to vote on their own investment in the startup. 

The irony is rich. Musk, who once warned that AI posed an existential threat to humanity, is now racing to deploy it across the federal government. His brief stint overseeing the "Department of Government Efficiency" (DOGE) before falling out with Trump included pushing agencies to adopt Grok. Even after their split, the Pentagon was ready to bet big on Musk's AI. 

Pattern Recognition 
Grok's antisemitic episode isn't an aberration—it's part of a pattern of AI chatbots churning out hateful content. The technology is fundamentally unpredictable, prone to failure modes that range from embarrassing to dangerous. Yet the response from both companies and government agencies remains the same: fix the immediate problem, issue an apology, and keep deploying. 

The MechaHitler incident demonstrated the pitfalls of rapid AI deployment and the potential consequences of training flaws or user manipulation. But it also revealed something darker: how little these failures actually matter to the institutions buying AI. A week from hate speech to government contract isn't a bug—it's a feature of a system that values speed over safety, innovation over ethics. 

The Pentagon didn't respond to requests for comment beyond its news release. Neither did the White House. In the silence, the message was clear: the AI gold rush continues, no matter what horrors emerge from the code. 

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].