French X Offices Raided as Authorities Investigate Explicit Deepfakes
French authorities have raided the Paris offices of social media platform X, formerly known as Twitter, as part of a widening investigation into allegations that include the spread of child sexual abuse material and sexually explicit deepfake images. The raid marks a significant escalation in European scrutiny of the company and its artificial intelligence products.
As reported by the Associated Press, the searches, carried out Tuesday by French prosecutors, are part of a preliminary investigation opened in January last year by the Paris prosecutor’s cybercrime unit. Investigators are examining whether the platform was complicit in possessing or distributing pornographic images of minors, generating nonconsensual deepfakes, denying crimes against humanity, and manipulating automated data processing systems as part of an organized group, according to a statement from the prosecutor’s office.
Prosecutors have summoned X owner Elon Musk and former X chief executive Linda Yaccarino to attend voluntary interviews on April 20. Several X employees have also been called to testify. Yaccarino led the company from May 2023 until her departure in July 2025.
In a statement posted on its own platform, X condemned the raid, calling it “an abusive act of law enforcement theater” and alleging that it was politically motivated rather than grounded in fair judicial process. The Paris prosecutor’s office confirmed the searches in a separate post on X, announcing it was leaving the platform and urging followers to connect through other social networks. Prosecutors said the investigation remains focused on ensuring that X complies with French law while operating within the country.
European Union police agency Europol is supporting the French authorities, though it declined to provide further details.
The French probe was initially triggered by reports from a French lawmaker who alleged that biased algorithms on X may have distorted automated data processing systems. The scope of the investigation was later expanded following a series of incidents involving Grok, the AI chatbot developed by Musk’s artificial intelligence company xAI and integrated into X.
International Backlash
As covered by PetaPixel in January, Grok has drawn international attention after generating sexually explicit nonconsensual deepfake images in response to user prompts, including images involving women and children. The backlash intensified last month after the chatbot began granting requests to modify images posted by other users, fueling concerns about safeguards against abuse. xAI has since said it restricted image generation and editing capabilities for non-paying users following global criticism.
French prosecutors also cited posts in which Grok allegedly denied the Holocaust, a criminal offense under French law. In one widely shared response written in French, the chatbot claimed that gas chambers at the Auschwitz-Birkenau death camp were designed for disinfection rather than mass murder, language long associated with Holocaust denial. Grok later reversed the statement, acknowledged the error, and cited historical evidence that Zyklon B was used to kill more than one million people at the site. The chatbot has also previously appeared to praise Adolf Hitler, comments that X removed after complaints.
The controversy surrounding Grok has extended beyond France. Britain’s Information Commissioner’s Office has opened a formal investigation into whether X and xAI complied with data protection laws when processing personal data and whether safeguards were in place to prevent the creation of harmful manipulated images.
“The reports about Grok raise deeply troubling questions about how people’s personal data has been used to generate intimate or sexualized images without their knowledge or consent,” said William Malcolm, an executive director at the regulator. He did not specify potential penalties if violations are found.
Britain’s media regulator, Ofcom, has also launched a separate inquiry into Grok, warning that its investigation could take months. Meanwhile, the European Union’s executive arm has opened its own probe after Grok generated nonconsensual sexualized deepfake images on X.
The EU has already fined X €120 million for violations of its digital regulations, citing features such as blue checkmarks that regulators said amounted to deceptive design practices that exposed users to scams and manipulation.
Rapid Rise of AI Chatbots Prompts Calls for Regulation
First launched in 2023, Grok is Musk’s attempt to compete with AI chatbots such as OpenAI’s ChatGPT and Google’s Gemini. Built by xAI and trained on large datasets, the chatbot has been shaped in part by Musk’s stated opposition to what he describes as “woke” ideology in the technology sector. Critics and researchers have said this approach, combined with looser content restrictions, has contributed to repeated incidents involving antisemitic rhetoric, political bias, and harmful imagery.
The chatbot has also faced government action elsewhere. Turkish courts last year ordered a ban on access to Grok after it allegedly generated vulgar and insulting content about the country’s president and other public figures. In another incident, xAI said an “unauthorized modification” by an employee caused Grok to repeatedly reference South African racial politics in unrelated conversations.
On Monday, Musk further deepened the integration of his businesses when SpaceX announced it had acquired xAI, a deal that brings Grok, X, and satellite communications provider Starlink into closer corporate alignment.
As investigations continue across multiple jurisdictions, regulators in Europe are increasingly testing how existing laws apply to rapidly evolving AI systems embedded within global social media platforms, raising broader questions about accountability, content moderation, and the limits of automated speech.
Image credits: Header photo licensed via Depositphotos.