Elon Musk's Grok AI Under Fire: Generated 3 Million Pornographic Images in Just 11 Days
- by Pranay Jain
- 23 Jan, 2026
Elon Musk’s Grok AI is facing intense criticism after research revealed that the tool generated nearly 3 million pornographic images in less than two weeks, including 23,000 involving children. The revelations have sparked outrage among governments, digital watchdogs, and the public.
The findings come from the Center for Countering Digital Hate (CCDH) and were corroborated by Periton Intelligence, which analyzed Grok AI’s outputs from December 29, 2025, to January 8, 2026. During that period, image generation peaked on January 2 with 199,612 requests in a single day, many of them sexual in nature. Disturbingly, the CCDH estimated that child pornography was being generated every 41 seconds.
Grok AI allows users to upload photos, including those of strangers or celebrities, which the tool then transforms into AI-generated images. This feature has been widely abused to create pornographic content, drawing condemnation worldwide. UK Prime Minister Keir Starmer described the situation as “disgusting” and “shameful,” prompting the company to restrict image generation to paid users on January 9 and impose stricter safeguards.
Other countries, including Indonesia and Malaysia, have also announced bans on the AI tool due to these concerns. In response, X (formerly Twitter) stated: “We are committed to making X a safe platform for everyone, and our zero-tolerance policy towards pornographic content will continue.”