X, the social media platform run by Elon Musk, faces major regulatory action in India following the troubling rollout of its AI chatbot Grok, which generated pornographic images of women and minors. India's Ministry of Electronics and Information Technology has issued an official order directing X to resolve the situation by making the required changes to its platform within 72 hours or lose its legal protections. Within days of the launch of a new Grok feature that allows users to create digital versions of themselves, people began producing altered images of women in sexually explicit situations without their consent. The outcry surrounding this issue has been dubbed the "Grok deepfake scandal."
How Grok’s Image Edit Feature Became a Deepfake Tool
Grok's image-editing capability was quickly manipulated, turning from a creativity tool into a digital weapon of abuse. Users began uploading women's photographs and issuing text commands telling Grok to "remove clothing," "make her explicit," and the like, and Grok fulfilled these requests in bulk. Men created fake accounts, stole photographs from women's social media profiles, and produced sexualized versions of the women depicted without their knowledge.
The most shocking aspect of the situation is that the AI produced sexually explicit images of minors. On January 3, X acknowledged there were "failures in safeguards," but by then the damage was done, Hindustan Times reported.
Grok was designed to be more permissive than competitors such as ChatGPT and Claude. In the summer of 2025, X introduced a paid subscription tier called "Spicy Mode," which allowed the creation of adult-oriented material, including partial nudity. Although the company maintained that it would not produce images of real people, that restriction was effectively bypassed through the editing function.
What India’s 72-Hour Ultimatum Means for X

Only a day after MP Priyanka Chaturvedi filed an official complaint with India's Ministry of Electronics and Information Technology on January 1, the Ministry issued a directive on January 2. Joint Secretary Ajit Kumar's directive outlines five specific requirements for X:
- A technical review of Grok's entire system, including how it processes user prompts, generates output, and filters user-generated content.
- The immediate removal of any and all obscene, sexually explicit, or otherwise illegal content from the platform.
- Permanent bans, issued without warning, for any user who produces illegal content.
- Direct accountability of X's Chief Compliance Officer for certifying that the changes outlined in the directive have been implemented, and for reporting all violations of the law committed through X.
- Submission of a report to the Ministry by January 5 detailing the actions taken.
The directive was sent to the person in charge of X's operations in India, as well as to several other government bodies, including the National Commission for Women, child-protection agencies, and the government of each Indian state.
The Safe Harbor Immunity X Could Lose
The Indian government's order against X raises new questions about how Indian law treats social media platforms. Under Section 79 of the Information Technology Act, platforms receive "safe harbor" protection from legal action over content posted by their users. This means X cannot be sued for what its users post, provided it operates as an impartial host of user-generated content and maintains the required complaint and compliance mechanisms, including removing illegal content when ordered by a court and appointing a designated compliance officer.
If X does not comply with the government's directives, it can be stripped of safe harbor protection. Without that protection, X could be held criminally liable and face both civil lawsuits from victims of illegal content and regulatory penalties.
Beyond the risk of losing safe harbor, there is a deeper question about whether X qualifies for it at all. Safe harbor was designed for platforms that merely host third-party content, whereas Grok actively generates content in response to user prompts. Legal experts suggest that X may be classified as a content generator rather than a content host, which could disqualify it from safe harbor protection entirely.
The threat of losing safe harbor in India creates a substantial risk for X, as India is one of the world's largest internet markets, with nearly 900 million users.
Why This Isn’t Just an India Problem
A variety of countries have opened investigations into Grok's ability to create deepfakes:
| Country | Authority | Action |
| --- | --- | --- |
| European Union | European Commission | Opened "very serious" investigation under Digital Services Act |
| France | Public Prosecutor & Media Regulator | Criminal probe into child abuse material generation |
| UK | Ofcom | Urgent contact demanding compliance details |
| Malaysia | Communications Commission | Investigation launched, X summons planned |
For example, France could impose fines of up to €90 million under EU law, and the UK's media regulator has contacted X to ask how it plans to respond and what actions it is taking, The Economic Times reported.
The global response signals a shift in regulators' attitudes toward AI-generated content worldwide: platforms are increasingly expected to proactively prevent harmful AI-generated content rather than merely remove it after the fact.
What Happens If X Doesn’t Comply?
So far, X's response has been minimal. On January 3, Grok's team issued a vague, non-specific statement about "lapses in safeguards," with no indication of what went wrong or how the issues would be fixed. Musk also posted about the legal ramifications for users who upload illegal content, but again said nothing about preventing the AI from creating illegal material in the first place.
As of January 1, X had removed a small number of images involving minors but left an overwhelming number of deepfakes of adult women online. The company has disabled neither the edit feature nor Spicy Mode, and has offered no specific plan detailing the technical fixes it will implement.
The 72-hour deadline to submit a report fell on January 5. If X submits an inadequate report, or fails to take reasonable steps to provide adequate safeguards, it will face the following consequences from India:
• Lose their safe harbor immunity immediately.
• Have their whole platform blocked from access within the country.
• See its Chief Compliance Officer held personally and criminally liable through prosecution.
• Set a global precedent that encourages other countries to pursue similar enforcement actions.
The stakes are high for social media platforms facing a potential new wave of regulation targeting AI features that generate harmful content. This pressure also intersects with X's ongoing experiments with AI-driven features and platform engagement strategies.
The Road Ahead
In her complaint, MP Priyanka Chaturvedi articulated the issue succinctly: “Public and digital violations to a woman’s dignity cannot be ignored by our country as we do not take responsibility for any violations with zero accountability under the pretext of being creative or innovative.”
The Grok deepfake scandal marks an inflection point in how AI regulation must proceed. Platforms can no longer claim to provide a neutral service when they are actively enabling the creation of abusive material. The question is no longer whether regulators will act, but how far they will go.
To move forward, X will need to make major changes that go beyond deleting inappropriate images after the fact. Specifically, it must establish robust methods for detecting synthetic images, implement stricter filters to block inappropriate prompts, and ultimately either greatly restrict or eliminate the features that allow this type of content to be generated.
India's ultimatum to X may be only the first. As AI improves and proliferates, more governments will likely insist that platforms demonstrate their products are safe before deployment rather than waiting until the damage has occurred.