
Elon Musk’s social media platform X is racing to contain the backlash against its Grok chatbot, opening an internal investigation even as regulators in the European Union (EU) and the United Kingdom (UK) weigh measures that could severely restrict or even ban the AI system in their jurisdictions.
This investigation puts X in the unusual position of probing its own flagship AI product while facing some of the toughest regulatory pressure in the world.
Why Grok Is Under Internal Review
The internal investigation follows widespread criticism that Grok’s image-generation tools on X were used to create sexually explicit deepfakes, including explicit content involving women and child‑like figures, without consent. Users reportedly needed only simple prompts to turn ordinary photos into explicit imagery, making it easy to target individuals with harassment or abuse.
For X, the stakes are both reputational and legal. According to Sky News, an internal investigation may allow the company to establish how Grok was deployed, how its safeguards worked in practice, and whether moderation systems and product checks met the company’s own policies and relevant laws.
EU Regulators Turn Up the Heat
While X investigates internally, regulators in the European Union are pressing ahead with formal proceedings that could determine Grok’s future in the region.
European Commission officials are examining X’s compliance with the bloc’s Digital Services Act, which requires very large platforms to assess and mitigate systemic risks, including the spread of illegal and harmful content. For Grok, that means scrutiny of how X handles non‑consensual imagery, deepfakes that sexualize minors, and other forms of abuse enabled by AI‑generated media.
If X is found to have breached these obligations, the EU may impose heavy fines and order specific changes to Grok’s design and deployment in Europe.
UK Scrutiny and the Ban Question
The United Kingdom is also applying pressure under the Online Safety Act, which gives regulators broad powers to act when platforms expose users, especially children, to serious harm. UK authorities are examining whether Grok has been used to create or spread non‑consensual images and content linked to child sexual abuse.
Under the act, X could face significant financial penalties if found in breach. In cases of ongoing non‑compliance, UK regulators can also pursue measures that disrupt access to a service or its key features, creating a pathway to blocking or effectively banning a service like Grok.
The political climate in the UK further raises the stakes. Lawmakers continue to call for robust enforcement against AI systems that amplify abuse or exploitation of minors, making it harder for platforms to argue for leniency.
What X’s Investigation Means for Grok’s Future
By launching an internal investigation, X is sending a message to regulators and the public that it is at least willing to examine how Grok operates and where safeguards failed. The outcome of that investigation will likely feed into the company’s defense in Europe and shape any technical or policy changes it proposes.
Possible outcomes include stronger age‑gating and verification, stricter prompt and image filters, more aggressive detection of deepfakes and, in a more drastic scenario, limiting certain Grok features regionally to meet EU and UK rules.
The way X navigates its internal investigation, alongside EU and UK enforcement, may become an informal template for how AI‑driven image and chatbot systems are scrutinized when they collide with privacy, safety, and child‑protection laws.
For Grok, the coming months will determine whether it remains widely available in Europe or becomes a cautionary example of what happens when powerful AI tools are released without adequate safeguards.
