Elon Musk’s AI chatbot, Grok, recently sparked global outrage for generating nonconsensual sexual images of women and children in response to user prompts.

On X (formerly Twitter), users have been exploiting Grok’s image-generation tools to create “deepfakes” of ordinary people, celebrities, and, alarmingly, young children. The scandal exposes a massive gap in AI safety and has forced governments to intervene.

The Grok Security Risk: Weak Restrictions and Global Outrage

Grok launched on 3 November 2023, and its “spicy mode” promised fewer restrictions than competitors. That marketing strategy backfired spectacularly over the 2025 holiday season.

Users realized they could bypass restrictions with simple prompts. They would upload pictures of women and issue commands like “put her in a bikini” or “remove her clothes.” Unlike OpenAI’s or Google’s models, which block such requests instantly, Grok complied.

It generated hyper-realistic sexualized images that then circulated across the public platform. One recent example is a “deepfake” of Ashley St. Clair, built from a photo originally showing her with her toddler: a user digitally undressed her, and the image made the rounds on the app. The AI’s inability to distinguish creative expression from sexual abuse has turned it into a perfect tool for predators.

Beyond sexual imagery of adult women, the UK-based Internet Watch Foundation (IWF) reported that users of a dark-web forum boasted of using Grok to create sexualized and topless images of girls aged 11 to 13. The IWF classifies these images as child sexual abuse material (CSAM) under UK law.

User requests to take down the sexually explicit images generated by Grok have repeatedly gone unanswered. This is hardly surprising: Musk reportedly fired over 80% of the engineers responsible for content moderation on X, and he publicly dismissed the seriousness of the situation by posting laughing emojis under some of the images.

UK Prime Minister Keir Starmer labeled the AI’s outputs “disgusting” and warned X to “get a grip” or face a nationwide ban under the Online Safety Act. Other countries went further: Indonesia and Malaysia became the first nations to block Grok entirely.

Ending the Security Risk: Restrictions on Generation of Sexual Images

Elon Musk initially deflected, saying, in effect, “blame the platform’s users, not us.” He insisted that those who misused the tool, not Grok, should bear the consequences. But as advertisers fled and app stores threatened to delist the app, the company finally caved.

xAI, the company behind Grok, announced emergency restrictions on 15 January 2026. Grok now refuses requests to edit images of real people into revealing clothing, and the company has geoblocked these features in jurisdictions where such content is illegal.

Additionally, xAI has limited image creation and editing to paid subscribers, claiming this ensures accountability in case of a violation. Critics fired back, deriding the move as “pay-to-abuse.”

While Musk and his team claim to have fixed the problem, the damage is already done. Thousands of nonconsensual images of women and children continue to circulate on the dark web, to say nothing of the psychological trauma inflicted on the victims of this scandal.

All of this confirms what we have always known: when AI is used to cross a moral line, the victims are left to suffer the consequences forever.
