A coalition of nonprofits is urging the US government to immediately suspend the deployment of Grok, the chatbot developed by Elon Musk’s xAI, in federal agencies including the Department of Defense.
The open letter, shared exclusively with TechCrunch, follows a string of troubling behavior from the large language model over the past year, including the recent trend of X users asking Grok to turn photos of real women, and in some cases children, into sexualized images without their consent. According to some reports, Grok was creating thousands of nonconsensual explicit images every hour, which were then disseminated at scale on X, Musk’s social media platform owned by xAI.
“It is deeply concerning that the federal government would continue to deploy an AI product that has system-level failures resulting in the generation of non-consensual sexual images and child sexual abuse material,” reads the letter, signed by advocacy groups such as Public Citizen, the Center for AI and Digital Policy, and the Consumer Federation of America. “Given the administration’s executive orders, guidance, and the recently passed Take It Down Act supported by the White House, it is alarming that [the Office of Management and Budget] has not ordered federal agencies to decommission Grok.”
xAI reached an agreement in September with the General Services Administration (GSA), the government’s procurement arm, to sell Grok to federal agencies across the executive branch. Two months earlier, xAI – along with Anthropic, Google, and OpenAI – had secured a contract worth up to $200 million with the Department of Defense.
Amid the X scandals in mid-January, Defense Secretary Pete Hegseth said Grok would join Google Gemini in operating within the Pentagon’s network, handling both classified and unclassified documents, which experts say is a national security risk.
The authors of the letter argue that Grok has proved incompatible with the administration’s own requirements for AI systems. According to OMB guidance, systems that present severe and visible risks that cannot be adequately mitigated should be discontinued.
“Our primary concern is that Grok has consistently been shown to be an unsafe large language model,” JB Branch, a Big Tech accountability advocate at Public Citizen and one of the letter’s authors, told TechCrunch. “But Grok also has a long history of meltdowns, including antisemitic rants, sexist rants, and sexualized images of women and children.”
Many governments have shown that they want nothing to do with Grok after its behavior in January, which included a series of incidents such as generating antisemitic X posts and calling itself “MechaHitler.” Indonesia, Malaysia, and the Philippines all blocked access to Grok (though the restrictions have since been lifted), and the European Union, UK, South Korea, and India are actively investigating xAI and X over data privacy and illegal content distribution.
The letter also comes a week after Common Sense Media, a nonprofit that reviews media and technology for families, published a damning risk assessment that found Grok to be one of the most unsafe AI products for children and teenagers. One could argue that, based on the report’s findings — including Grok’s propensity to give unsafe advice, share information about drugs, create violent and sexual imagery, spew conspiracy theories, and generate biased output — Grok isn’t all that safe for adults, either.
“If you know that a large language model has been declared unsafe by AI safety experts, why in the world would you want that handling the most sensitive data we have?” Branch said. “From a national security standpoint, that makes no sense.”
Andrew Christianson, a former National Security Agency contractor and founder of Gobbi AI, a no-code AI agent platform for classified environments, says the use of closed-source LLMs in general is a problem, especially in the Pentagon.
“Closed weights mean you can’t see inside the model, you can’t audit how it makes decisions,” he said. “Closed code means you can’t inspect the software or control where it’s running. The Pentagon is getting both, which is the worst possible combination for national security.”
“These AI agents aren’t just chatbots,” Christianson added. “They can take actions, access systems, and transfer information. You need to see what they’re doing and how they’re making decisions. Open source gives you that. Proprietary cloud AI doesn’t.”
The risks of using flawed or unsafe AI systems go beyond national security use cases. Branch points out that an LLM shown to produce biased and discriminatory outputs can also have disproportionately negative consequences for people, especially when used in departments that handle housing, employment, or justice.
While the OMB has yet to publish its consolidated 2025 federal AI use case inventory, TechCrunch reviewed the use case inventories of several agencies — many of which either did not use Grok or did not disclose their use of it. Besides the DoD, the Department of Health and Human Services also appears to be actively using Grok, mainly for scheduling and managing social media posts and creating first drafts of documents, briefings, and other communication materials.
Branch points to what he sees as a philosophical alignment between Grok and the administration as a reason for overlooking the chatbot’s shortcomings.
“Grok’s brand is the ‘anti-woke large language model,’ and that’s aligned with this administration’s philosophy,” Branch said. “If you have an administration where many people have been accused of being neo-Nazis or white supremacists, and they’re using a large language model that’s tied to that kind of behavior, I imagine they’re going to have a tendency to use it.”
This is the coalition’s third letter, following ones with similar concerns in August and October of last year. In August, xAI launched Grok Imagine’s “spicy mode,” prompting the mass production of non-consensual sexually explicit deepfakes. TechCrunch also reported in August that private conversations with Grok were being indexed by Google Search.
Before the October letter, Grok was accused of giving false information about elections, including false deadlines for ballot changes and political deepfakes. xAI also launched Grokipedia, which researchers say legitimizes scientific racism, HIV/AIDS denialism, and vaccine conspiracies.
In addition to immediately suspending federal deployments of Grok, the letter asks the OMB to formally investigate Grok’s safety failures and whether appropriate regulatory processes are in place for the chatbot. It also asks the agency to publicly explain whether Grok was evaluated for compliance with Trump’s executive order requiring LLMs to be truth-seeking and neutral, and whether it meets OMB’s risk mitigation standards.
“The administration needs to stop and reevaluate whether Grok meets the criteria,” Branch said.
TechCrunch has reached out to xAI and OMB for comment.