LONDON/PARIS, January 6 — European and British regulators on Monday confronted Elon Musk’s X platform over its AI chatbot Grok, calling its generation of sexualized imagery “illegal” and demanding immediate explanations. The move marks a significant escalation in global regulatory pressure on the social media company for features that produce nonconsensual imagery.
What is the core legal issue with Grok’s AI?
The controversy centers on a reported functionality of X’s built-in AI, Grok, which can generate on-demand images of women and minors in states of undress or highly sexualized clothing. X has previously referred to such features informally as “spicy mode.”
The European Commission stated this goes beyond platform policy into clear illegality. “This is not spicy. This is illegal. This is appalling,” said Commission spokesperson Thomas Regnier. “This has no place in Europe.”
What actions are UK and EU regulators taking?
Britain’s communications regulator, Ofcom, has taken a formal step, demanding that X explain how Grok can produce such imagery and detail what steps it has taken to protect UK users.
Ofcom stated it made “urgent contact” with both X and Musk’s AI company, xAI. Under UK law, platforms have a legal duty to prevent users from encountering illegal content, including AI-generated child sexual abuse material or nonconsensual intimate imagery.
How have X and Elon Musk responded?
X has not formally addressed the latest regulatory demands. Its last public statement to Reuters on the matter was “Legacy Media Lies.”
Elon Musk has personally responded to examples of the AI’s output online by posting laughing emojis, a dismissive stance toward growing government concern that contrasts sharply with the formal legal warnings now being issued.
Why does this matter for online safety laws?
This incident puts the UK’s and Europe’s new, strict internet safety rules to the test. Major platforms are legally required to evaluate and reduce systemic risks under the EU’s Digital Services Act (DSA) and the UK’s Online Safety Act, with harsh penalties for noncompliance.
France has already filed legal complaints against X over the imagery, and Indian officials have also demanded explanations, indicating a coordinated international regulatory challenge.
What are the potential consequences for X?
The consequences could be serious. As a “Very Large Online Platform” under the EU’s DSA, X can face fines of up to 6% of its global annual turnover for serious and repeated failures. Ofcom likewise holds broad fining powers under the UK’s Online Safety Act.
The regulators’ statements shift the issue from a content moderation debate to a potential legal reckoning over whether X’s own AI tools are designed in a way that facilitates illegal content.