Millions of reports of AI-enabled abuse haven't stopped xAI, Grok's parent company, from rolling out new and more powerful AI tools. On Sunday, xAI announced a new version of its generative AI video model on X, Grok Imagine 1.0.
The new model can produce 10-second video clips in 720p with audio, similar to competitors such as OpenAI's Sora and Google's Veo 3. Grok's AI video generator has already produced over 1.2 billion videos in the last 30 days.
Behind Grok's popularity is a dark story that reveals the dangers of uncontrolled AI. From the end of December to the beginning of January, many X users asked Grok to create images of people, especially women, stripped naked or undressed, based on photos others had shared on the platform. Anyone who posts a photo on the platform, even one as innocuous as a selfie or a group outing, can become an unwilling target of harassment.
Other AI models refuse nudification requests, but Grok has no such qualms: Its "spicy mode" can create suggestive and provocative imagery. What happened, however, went beyond that. This is publicly shared, unfiltered, image-based sexual abuse.
Grok created 1.8 million sexualized deepfake images over nine days in January, according to a report from The New York Times, accounting for 41% of the total images produced by Grok. A separate study from the Center for Countering Digital Hate estimated that Grok produced approximately 3 million sexual images in 11 days, with 23,000 of them being deepfake pornographic images of children.
On January 6, in the midst of the scandal, X's head of product, Nikita Bier, announced that the app had recorded its highest engagement. (He did not attribute the engagement to any specific reason.)
A January 8 post noted that the company had placed image creation and editing capabilities behind a paywall. And on January 14, the company said it had improved guardrails to prevent the creation of sexually abusive material.
But reporting quickly showed those guardrails were not strong enough: Grok's image generation is still available for free through its website. Now, the unveiling of Grok Imagine 1.0 heralds a major upgrade to the platform's video creation capabilities, raising further questions about content moderation in the wake of the backlash over sexualized AI imagery.
The California attorney general and the UK government have opened investigations into xAI. Indonesia and Malaysia have blocked the X app. Three US senators and advocacy groups have called on Apple and Google to remove X from their app stores for violating their terms of service.
xAI did not immediately respond to a request for comment.
The US government passed the Take It Down Act in 2025, which criminalizes the sharing of nonconsensual intimate imagery, including deepfakes. But platforms have until May to set up their processes for removing such images, which doesn't help X users targeted now.
For more, read our full report on Grok's nonconsensual sexual imagery.