India Finally Stands (Tentatively) Up to AI Deepfakes

India’s Move to Regulate AI Deepfakes

On 22 October 2025, India’s Ministry of Electronics and Information Technology (“MeitY”) published and invited comments on draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules”). The IT Rules are India’s online content regulatory framework, outlining due diligence, grievance, and content obligations for digital intermediaries, including social media platforms, online forums, and OTT news and streaming services.

These draft amendments are essentially India’s first attempt to regulate Artificial Intelligence (“AI”) generated content.

The short version: MeitY wants social media platforms and other online intermediaries to tag synthetic (i.e., AI-generated) content with traceable metadata or visible markers, and to take reasonable technical steps to detect and label such content and control its dissemination.

What do India’s proposed Deepfake Rules say?

  1. First, the draft amendments introduce a formal definition of “synthetically generated information” in Rule 2(1)(wa) – i.e. “information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information reasonably appears to be authentic or true.”
  2. The amendments also clarify, in Rule 2(1A), that any reference to “information” in the context of unlawful acts – including under Rule 3(1)(b), Rule 3(1)(d), Rule 4(2), and Rule 4(4) of the IT Rules – shall include synthetically generated information. This means that intermediaries must now exercise the same level of due diligence in relation to synthetic content as ordinary content: no uploading or sharing any information that belongs to another person, is obscene or pornographic, is harmful to a child, or is otherwise prohibited under any Indian law (see Rules 3(1)(b) and 3(1)(d)). Rule 4(2) would require significant social media intermediaries (“SSMIs”) to identify the first originator of the information, while Rule 4(4) would require them to deploy technology-based measures to identify unauthorised synthetic content.
  3. Section 79(2) of the Information Technology Act sets out the conditions under which an intermediary may claim immunity (safe harbour) from liability for third-party content. The amendments propose a proviso to Rule 3(1)(b) that would extend such Section 79(2) protection to intermediaries that remove or disable access to synthetically generated information on the basis of reasonable efforts or user grievances.
  4. Rule 3(3) of the draft amendments also requires that intermediaries which provide computer resources enabling the creation or modification of synthetically generated information ensure that all such content is clearly labelled or embedded with a permanent, unique metadata tag or identifier. This label or identifier must be prominently displayed or made audible within the content – covering at least 10% of the visible surface area for visual media, or the initial 10% of the duration for audio – so that users can immediately identify it as synthetically generated (see the illustrative sketch after this list). Importantly, this rule also prohibits intermediaries from altering, suppressing, or removing these labels or identifiers.
  5. Enhanced due diligence obligations have been introduced for SSMIs under a new Rule 4(1A), requiring them to obtain user declarations on whether uploaded content is synthetically generated, and deploy reasonable and proportionate technical measures to verify such declarations. They must also ensure that any synthetic content is clearly labelled or accompanied by a notice indicating its artificial nature.

This provision also states that if an intermediary knowingly permits, promotes, or fails to act upon such content, it will be deemed to have failed to exercise due diligence under this sub-rule.
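
For illustration, here is a minimal Python sketch of what Rule 3(3)-style labelling could look like for an image, using the Pillow library. Everything beyond the 10% coverage threshold – the label text, the banner placement, the metadata key "synthetic-content-id", and the function name – is an assumption for demonstration, not something the draft rules prescribe.

```python
# Minimal sketch of Rule 3(3)-style labelling for a synthetic image,
# using Pillow. The label text and metadata key are illustrative
# assumptions; the draft rules specify only the 10% coverage threshold.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_synthetic_image(src_path: str, dst_path: str, content_id: str) -> None:
    img = Image.open(src_path).convert("RGB")
    w, h = img.size

    # A full-width banner whose height is 10% of the image covers
    # exactly 10% of the visible surface area.
    banner_h = max(1, h // 10)
    draw = ImageDraw.Draw(img)
    draw.rectangle([(0, h - banner_h), (w, h)], fill="black")
    draw.text((10, h - banner_h + banner_h // 4),
              "AI-GENERATED / SYNTHETIC CONTENT", fill="white")

    # Embed a permanent, unique identifier in the file's metadata
    # (a PNG text chunk here; a production system might instead use a
    # provenance standard such as C2PA content credentials).
    meta = PngInfo()
    meta.add_text("synthetic-content-id", content_id)
    img.save(dst_path, format="PNG", pnginfo=meta)

# Example usage (assumes upload.png exists):
label_synthetic_image("upload.png", "labelled.png", "meity-demo-0001")
```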

Deepfakes vs. Indian Courts

Deepfakes were being challenged in India’s courts before its legislature got to thinking about them. For example, in Anil Kapoor v. Simply Life India & Ors., the Delhi High Court granted an interim order restraining the unauthorised use of Kapoor’s name, likeness, voice and persona (including via AI-generated GIFs, deepfake videos and merchandise). Legal experts and AI law firms in India are closely monitoring such cases to assess how emerging regulations will shape liability and compliance in the digital age.

More recently, in cases brought by Aishwarya Rai Bachchan and Abhishek Bachchan against YouTube and Google, the actors sought $450,000 in damages for the misuse of their images and voices in AI-generated deepfakes, claiming that these deepfakes violated their right to privacy and personality, as well as their publicity rights and copyright.

Interestingly, the Bachchans also asked the court to direct Google to establish safeguards to ensure that deepfake videos on YouTube were not used to train other AI platforms. They argued that YouTube’s content and third-party training policy perpetuates the proliferation of misleading content online, by allowing Google to employ user-generated videos to train rival AI models. 

Global Deepfake Regulation

India isn’t alone in wrestling with deepfakes, of course. Here’s an overview of how some other jurisdictions around the world are dealing or looking to deal with AI deepfakes:

United States:

Federal: The U.S. recently passed the federal Take It Down Act (which we’ve previously covered in this article) to criminalize non-consensual sharing of intimate imagery including AI-generated deepfakes. The law does not merely punish the initial perpetrators; it imposes obligations on online platforms to act when such content is flagged.

State laws: Astoundingly, virtually every U.S. state has either enacted or proposed laws to regulate deepfakes in specific contexts, such as intimate imagery, election interference or identity theft. California, for example, has Assembly Bill 730, which outlaws deceptive deepfakes in political campaigns, and Assembly Bill 602, which creates civil liability for non-consensual deepfake pornography.

Texas has Senate Bill 751 (now Tex. Elec. Code Ann. § 255.004), which criminalises the creation and dissemination of deepfake videos made with intent to deceive and to influence the outcome of an election – but only when published within 30 days of the election.

European Union:

The EU was among the first jurisdictions in the world to regulate AI, through its AI Act, which requires that AI-generated content (especially deepfakes) be clearly disclosed as such.

China:

In China, new regulations effective from September 1, 2025 have introduced a mandatory, standardised system for identifying synthetic content created using AI. The label may be visible (e.g., a watermark on an image or a caption in a video indicating that the content is synthetic) or invisible (e.g., a digital signature or mark embedded in the file’s metadata that algorithms can detect even if the visible label is removed). Online platforms are responsible for checking for these markers and seeking clarification on unmarked, suspicious content.
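
By way of illustration, here is a minimal Python sketch of the platform-side check for an invisible metadata label of the kind China's rules contemplate. It reuses the hypothetical "synthetic-content-id" key from the labelling sketch above; a real deployment would follow the standardised marking scheme, not an ad-hoc key.

```python
# Minimal sketch of a platform-side check for an invisible metadata
# label. The "synthetic-content-id" key is an assumption carried over
# from the labelling sketch earlier in this piece.
from PIL import Image

def looks_labelled(path: str) -> bool:
    img = Image.open(path)
    # For PNGs, text chunks surface in img.info; a missing marker means
    # the platform should treat the file as unverified and seek
    # clarification from the uploader, not assume it is authentic.
    return "synthetic-content-id" in img.info

print(looks_labelled("labelled.png"))  # True for the file made above
print(looks_labelled("upload.png"))    # False: no embedded marker
```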

Denmark:

Denmark, famously, has proposed amendments to its copyright law to ensure that every person has “the right to their own body, facial features and voice”, thereby treating each person’s likeness as their intellectual property, and granting them the right to demand takedown of such content.

Well Begun, Only Half Done

India’s proposed amendments are a welcome first step towards deepfake regulation. By legally designating synthetic content as “information” under the IT Rules, the amendments extend the applicability of the current IT Rules to cover AI-generated content, thereby establishing that when such content is obscene or pornographic, infringes intellectual property rights, deceives or misleads users about the origin of a message, or impersonates another person, online platforms must deal with such ‘synthetic’ content in the same manner as any other content.

It’s an efficient approach, but also severely limited, in that it simply makes deepfakes a problem for online platforms to deal with.

While most deepfake laws, like India’s, place the burden of tagging and regulating deepfakes on online intermediaries, they generally don’t stop there: U.S. state laws, such as those in California and Texas, prohibit or criminalise deepfakes used in political manipulation or non-consensual sexual imagery. Similarly, South Korea’s laws directly prohibit and penalise deepfake pornography, China’s regulations impose identity verification requirements to prevent impersonation and fraud, and the United Kingdom’s Online Safety Act criminalises the creation as well as sharing of sexually explicit deepfake imagery and requires all websites to carry out “highly effective age assurance” to block underage users from accessing adult content.

Furthermore, requiring intermediaries to “make reasonable efforts” – as India’s proposed deepfake rules do – to restrict the spread of synthetic content does not amount to a statutory prohibition, and leaves room for judicial dithering as to what constitutes “reasonable”. Likewise, the obligation for SSMIs to “deploy technology-based measures” remains somewhat ambiguous, offering little clarity on what tools or safeguards must actually be implemented to restrict unlawful information.

When it comes to labelling and traceability, global experience (such as the circulation of AI-generated videos during the 2024 European elections despite the AI Act’s disclosure requirement) shows that such requirements can be bypassed.

Deepfakes therefore call for a more comprehensive legal response, ideally through criminalisation or a dedicated framework that sets uniform standards for their creation, use, and distribution, covering issues like electoral manipulation (which the IT Rules currently do not directly address), non-consensual sexual or personal content, and other serious harms.

India has now stood up to deepfakes, and one hopes it will not promptly sit back down.

Authors: Shantanu Mukherjee, Varun Alase
