The European Commission has initiated a formal investigation into X under the Digital Services Act to determine whether the platform failed to assess the risks of its Grok AI tool before deployment. The probe follows reports that the AI was used to generate sexually explicit content, including material that may constitute child sexual abuse.
EU officials said the tool's potential harms appear to have materialized, resulting in manipulated images that degrade women and children. Authorities in both the European Union and the United Kingdom are now examining whether the platform prioritized the growth of its service over the legal rights and safety of its users.
The investigation was spurred by widespread reports of Grok-generated deepfakes, prompting the UK Information Commissioner's Office and Ofcom to seek details on the platform's data protection and safety compliance. Similar pressure emerged from the United States, where the California Attorney General opened an inquiry into the nonconsensual sexually explicit material produced by the chatbot.
In response to the mounting legal scrutiny, X announced it would restrict Grok’s image generation features to paid subscribers. However, the move drew sharp criticism from government spokespeople, who argued that turning a tool capable of creating unlawful content into a premium service was an insult to victims of sexual violence and misogyny.
As a designated very large online platform with more than 45 million monthly active users in the EU, X is legally required to mitigate systemic risks such as the spread of illegal content. The formal proceeding will now determine whether the company met its obligations under the Digital Services Act or failed to protect citizens from the fundamental rights violations associated with its AI technology.
Source: EU Launches Investigation Into X Over Grok Generated Sexual Images


