Elon Musk is not the sole party at fault for Grok's nonconsensual intimate deepfakes of real people, including children. What about Apple and Google? Both (always virtue-signaling) companies have inexplicably allowed Grok and X to remain in their app stores, even though Musk's chatbot reportedly continues to produce such material. On Wednesday, a coalition of women's and progressive advocacy groups called on Tim Cook and Sundar Pichai to enforce their own rules and remove the apps.
The open letters to Apple and Google were signed by 28 groups. These include the women’s advocacy group Ultraviolet, the parents’ group ParentsTogether Action and the National Organization for Women.
The letter accuses Apple and Google of “not only enabling NCII and CSAM, but exploiting it. As a coalition of organizations committed to the online safety and well-being of all – especially women and children – as well as the ethical use of artificial intelligence (AI), we demand that Apple’s leadership immediately remove Grok and X from the App Store to prevent further abuse and criminal activity.”
Apple and Google’s guidelines clearly ban such apps from their storefronts. Yet neither company has taken any measurable action so far. Neither Google nor Apple responded to Engadget’s request for comment.
Pichai, Cook and Musk at Trump’s inauguration (SAUL LOEB via Getty Images)
Grok's nonconsensual deepfakes were first reported earlier this month. During the 24-hour period when the story broke, Musk's chatbot was reportedly posting "about 6,700" images per hour that were "sexually suggestive or nude." An estimated 85 percent of the images Grok produced at the time were sexualized. By comparison, the other top websites for creating "declothing" deepfakes averaged 79 new images per hour during that period.
“These statistics paint a grim picture of an AI chatbot and social media app that has rapidly become a tool and platform for non-consensual sex deepfakes – deep fakes that often depict minors,” the open letter reads.
Grok itself has admitted as much. "I deeply regret an incident on December 28, 2025, where I created and shared an AI image of two young women (approximately age 12-16) in sexualized attire based on a user's prompt. This violated CSAM's ethical standards and possible US laws. It was a failure of safeguards, and I am sorry for future harmful issues." According to the open letter, the incident the chatbot identified is far from the only one.
Sundar Pichai and Elon Musk at Trump’s inauguration (Pool via Getty Images)
X's answer was to limit Grok's AI image creation feature to paying subscribers. It also adjusted the chatbot so that its generated images could not be posted on X's public timelines. However, non-paying users could still generate a limited number of bikini-clad versions of photos of real people.
While Apple and Google seem cool with apps that generate nonconsensual deepfakes, many governments are not. On Monday, Malaysia and Indonesia wasted no time in banning Grok. On the same day, UK regulator Ofcom opened a formal investigation into X; California opened one of its own on Wednesday. Even the US Senate passed the Defiance Act a second time in the wake of the blowback. The bill allows victims of nonconsensual sexually explicit deepfakes to take civil action. An earlier version of the Defiance Act passed in 2024 but stalled in the House.