DEEPFAKE REGULATION IN INDIA: THE 3-HOUR TAKEDOWN RULE, CONSTITUTIONAL LIMITS, AND THE URGENT CASE FOR A DEDICATED LEGAL FRAMEWORK

When Artificial Intelligence Becomes a Weapon: Understanding Deepfakes and Why India's Legal System Is Not Yet Equipped to Stop Them

Think of deepfake technology as the digital age's most sophisticated instrument of deception. Unlike a doctored photograph that a trained eye might detect, or a fabricated quote that can be traced to its source, a deepfake video or audio clip is designed to be indistinguishable from reality. It places a real person's face on a body that is not theirs. It makes a real person appear to say words they never spoke. It creates a visual and audio record of events that never happened, and it does so with a realism that makes the content convincing to the vast majority of viewers who encounter it.

The consequences of this technology in the wrong hands are severe and multidimensional. At the personal level, a deepfake can destroy a reputation built over decades, infringe upon a person's most intimate privacy, and cause psychological harm that no court order can fully undo once the content has spread. At the societal level, deepfakes can distort political discourse, undermine democratic processes by fabricating footage of leaders and candidates, and corrode the basic trust that citizens place in visual and audio evidence.

India is not exempt from these harms. As one of the world's largest and fastest-growing digital societies, India faces the deepfake challenge at enormous scale, but without a dedicated legislative framework capable of addressing it. This article examines the current state of deepfake regulation in India, the existing legal provisions that partially address the problem, the proposed 3-hour takedown rule and its constitutional implications, and the framework of reform that India urgently needs.

What Deepfakes Actually Are: The Technology, Its Capabilities, and Why It Is Legally Distinct from Ordinary Misinformation

Deepfakes are synthetic media created using artificial intelligence tools that analyse existing audio-visual data and generate modified content in which a person is depicted saying or doing something they never said or did. The term combines "deep learning", the class of algorithms that powers the technology, with "fake". The result is content that is factually false but perceptually authentic.

The legal and social significance of deepfakes lies precisely in this authenticity. Traditional misinformation, whether in the form of a fabricated text report or a selectively edited photograph, can be identified, questioned, and refuted. The human mind applies natural scepticism to written claims. But visual and audio content is processed differently. People are conditioned to believe what they see and hear, and the deepfake exploits that conditioning with devastating precision.

The table below sets out the key distinctions between deepfakes and traditional misinformation and the legal implications of each distinction.

| Feature | Traditional Misinformation | Deepfake Content |
| --- | --- | --- |
| Nature of falsity | Text, edited images, or selectively framed facts | Fabricated audio-visual content depicting real people |
| Detectability by ordinary viewer | Relatively easier to identify and question | Designed to be indistinguishable from genuine content |
| Speed of harm | Harm accumulates over time as false information spreads | Harm is immediate and intensified by the perceived authenticity of the content |
| Reversibility | Corrections and retractions can partially address the harm | Once viral, deepfake content is virtually impossible to fully retract or correct |
| Legal characterisation | May constitute defamation, fraud, or electoral malpractice | May constitute defamation, privacy violation, identity misuse, obscenity, or electoral interference depending on content |
| Applicable legal framework | Multiple existing provisions address traditional misinformation | No dedicated Indian statute; existing provisions apply partially and imperfectly |

At the personal level, deepfakes may take the form of non-consensual explicit content that misappropriates a person's identity for sexual purposes, defamatory content that falsely depicts an individual committing crimes or making damaging statements, and harassment material designed to humiliate or intimidate. At the societal level, deepfake footage of political leaders making inflammatory statements or engaging in discrediting conduct poses a direct threat to electoral integrity and democratic discourse.

The Current Legal Landscape: How Existing Indian Law Partially Addresses Deepfakes

India does not have a dedicated statute addressing deepfake technology. Regulation is instead drawn from a patchwork of existing legislation governing intermediary liability, criminal conduct, and data protection. While these provisions offer some remedies, they are collectively insufficient to address the specific and novel challenges that deepfakes present.

The table below provides an overview of the principal existing legal provisions applicable to deepfakes and their limitations.

| Legal Provision | Relevance to Deepfakes | Key Limitation |
| --- | --- | --- |
| Section 79, IT Act 2000 (safe harbour for intermediaries) | Intermediaries lose immunity if they fail to act after obtaining actual knowledge of unlawful content | No specified timeframe for deepfake removal; no definition of deepfake content |
| Rule 3(1)(b), IT Rules 2021 (prohibited content categories) | Platforms must prohibit defamatory, obscene, privacy-infringing, and impersonation content | Deepfakes not expressly mentioned; application depends on case-by-case characterisation |
| Rule 3(2), IT Rules 2021 (24-hour removal obligation) | Platforms must remove specific categories of content within 24 hours of a complaint | Applies to nudity and explicit content; no explicit coverage of non-explicit deepfakes |
| BNS 2023 (defamation and obscenity provisions) | Deepfakes that defame or involve explicit content may attract criminal liability | Reactive remedy only; does not enable rapid content removal or prevention |
| DPDP Act 2023 (personal data protection) | A person's image or voice may constitute personal data; unauthorised use in deepfakes may attract liability | Focused on data processing compliance rather than content misuse; deepfake-specific application not yet defined |
| Article 21 (right to privacy as a fundamental right) | State has a constitutional obligation to protect citizens against identity misuse through deepfakes | Provides a constitutional basis for regulation rather than an operational remedy |

Section 79 of the IT Act and the Safe Harbour Framework: What It Provides and Where It Falls Short

Section 79 of the Information Technology Act, 2000 is the foundational provision governing intermediary liability in India. It provides that an intermediary shall not be liable for any third-party information, data, or communication link made available or hosted on its platform, provided that the intermediary does not initiate the transmission, does not select the receiver, and does not select or modify the information in the transmission. This safe harbour is conditional: if an intermediary receives actual knowledge that its platform is hosting unlawful content and fails to expeditiously remove it, the immunity is lost.

In the context of deepfakes, Section 79 becomes relevant when such content is hosted on social media platforms, video sharing services, or other digital intermediaries. However, the provision does not specify any timeframe within which removal must occur, does not define what constitutes actual knowledge in the context of deepfake-specific harms, and provides no guidance on the technical or procedural steps that platforms must take to identify deepfake content in the first instance.

The IT Rules, 2021 supplement Section 79 by introducing time-bound removal obligations. Rule 3(2) requires intermediaries to remove certain categories of content within 24 hours of receiving a complaint. These categories include content depicting nudity, sexual acts, and morphing of images. Deepfakes that fall within these categories may attract the 24-hour removal obligation. However, non-explicit deepfakes that cause severe reputational or political harm are not clearly covered, and the absence of a statutory definition of deepfake content means that classification disputes are inevitable.

The 3-Hour Takedown Rule: What It Proposes and Why Its Legal Status Remains Uncertain

The 3-hour takedown rule is a proposed policy intervention designed to ensure the rapid removal of harmful deepfake content before it can spread to a scale where the harm becomes irreversible. It is not currently codified in any Indian statute or regulation. Its proponents argue that the viral speed of digital content dissemination makes longer removal timelines inadequate: a deepfake video can be viewed by millions of people within hours of being uploaded, and a 24-hour or 48-hour removal obligation provides insufficient protection against this pace of harm.
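
To make the difference in timelines concrete, the sketch below computes the removal deadline a platform would face under the existing Rule 3(2) 24-hour obligation and under the proposed 3-hour rule. It is a purely hypothetical illustration: no statute or rule prescribes this logic, the category names are invented, and the 3-hour window reflects the proposal discussed above, not enacted law.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical illustration only. The 24-hour figure reflects Rule 3(2)
# of the IT Rules 2021; the 3-hour figure is the *proposed* rule
# discussed above and is not enacted law. Category names are invented.
REMOVAL_WINDOWS = {
    "explicit_or_morphed": timedelta(hours=24),  # Rule 3(2), IT Rules 2021
    "proposed_deepfake": timedelta(hours=3),     # proposed 3-hour rule
}

def removal_deadline(complaint_received: datetime, category: str) -> datetime:
    """Return the latest time by which a platform would have to act."""
    return complaint_received + REMOVAL_WINDOWS[category]

if __name__ == "__main__":
    received = datetime(2024, 6, 1, 9, 0, tzinfo=timezone.utc)
    print(removal_deadline(received, "explicit_or_morphed"))  # next day, 09:00 UTC
    print(removal_deadline(received, "proposed_deepfake"))    # same day, 12:00 UTC
```

Compressing the window from 24 hours to 3 leaves almost no time for human review of a complaint, which is the crux of the procedural-fairness objections analysed in the table below.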

The table below analyses the arguments for and against the 3-hour takedown rule from legal, practical, and constitutional perspectives.

| Dimension | Arguments in Favour | Arguments Against |
| --- | --- | --- |
| Harm prevention | Rapid removal contains the spread of deepfake content before it becomes viral; directly reduces the quantum of harm | Content cannot always be accurately identified as a deepfake within three hours; rapid removal of incorrectly identified content harms legitimate creators |
| Constitutional validity | Restrictions on privacy-violating and defamatory content are permissible under Article 19(2); the state has a positive obligation to protect dignity under Article 21 | May conflict with the principle in Shreya Singhal v. Union of India that intermediaries must act on court orders or government notifications rather than private complaints |
| Procedural fairness | Speed of action is essential where content causes immediate and severe harm | Content creators have no meaningful opportunity to be heard before their content is removed; natural justice principles are compromised |
| Platform feasibility | Major platforms have sufficient technical capacity to implement rapid removal mechanisms | Smaller platforms lack the detection tools and compliance resources to implement 3-hour removal reliably; creates an unequal burden |
| Risk of censorship | Focused on genuinely harmful content such as explicit non-consensual material | Platforms may over-remove content to avoid liability; chilling effect on legitimate expression under Article 19(1)(a) |
| Misuse potential | Rapid response to verified harmful content | Bad actors can use complaint mechanisms to have legitimate content removed on a false deepfake characterisation |

The Constitutional Framework: How Articles 19 and 21 Shape What Deepfake Regulation Can Legally Do

Any regulation of deepfake content in India must operate within the constitutional framework established by Part III of the Constitution. Two provisions are of central importance.

Article 19(1)(a) guarantees every citizen the right to freedom of speech and expression, including online expression. This right is not absolute. Article 19(2) permits the state to impose reasonable restrictions on this freedom in the interests of, among other things, the security of the state, public order, decency or morality, defamation, and contempt of court. Deepfake content that is defamatory, explicitly obscene, or likely to cause public disorder may legitimately be restricted under Article 19(2), provided the restriction is reasonable and proportionate.

Article 21 guarantees the right to life and personal liberty, which the Supreme Court has expansively interpreted to include the right to privacy, dignity, and personal reputation. The misappropriation of a person's identity through deepfake technology directly engages these rights. In K.S. Puttaswamy v. Union of India (2017), the Supreme Court held that privacy is a fundamental right and that the state has an obligation to protect individuals against privacy violations, including those arising from the misuse of personal data.

The table below sets out the constitutional framework applicable to deepfake regulation.

| Constitutional Provision or Precedent | Relevance to Deepfake Regulation | Key Principle |
| --- | --- | --- |
| Article 19(1)(a) | Protects freedom of expression online, including satirical or artistic use of deepfake technology | Any restriction on deepfake content must be a reasonable restriction under Article 19(2) and must not suppress legitimate expression |
| Article 19(2) | Permits restrictions on speech that constitutes defamation, obscenity, or a threat to public order | Provides the constitutional basis for regulating harmful deepfakes that fall within these categories |
| Article 21 | Protects privacy, dignity, and personal liberty against identity misuse through deepfakes | State has a positive obligation to protect citizens against deepfake-based privacy violations |
| Shreya Singhal v. Union of India (2015) | Established that intermediaries must act on court orders or government notifications, not private complaints | A 3-hour takedown regime based on private complaints may not satisfy this standard without adequate judicial or governmental oversight |
| K.S. Puttaswamy v. Union of India (2017) | Established privacy as a fundamental right; any interference must satisfy legality, necessity, and proportionality | Deepfake regulation must itself satisfy these tests; overbroad regulation may violate the right to expression |

The Practical Challenges: Why Even a Well-Designed Takedown Rule Faces Serious Implementation Problems

Even setting aside the constitutional questions, a rapid deepfake takedown regime faces substantial practical challenges that must be addressed for any regulatory framework to be effective.

The table below sets out the principal practical challenges and potential approaches to addressing each.

| Challenge | Nature of the Problem | Potential Approach |
| --- | --- | --- |
| Detection accuracy | Deepfake identification is technically complex; automated systems produce false positives and false negatives; human verification is time-consuming | Investment in AI-assisted detection tools; requirement that platforms invest in detection capacity proportionate to their size and reach |
| Inconsistency across platforms | Different platforms have different content moderation policies and technical capabilities; without uniform standards, enforcement is uneven | Statutory minimum standards for deepfake detection and removal applicable to all intermediaries above a defined threshold |
| Misuse of complaint mechanisms | Bad actors can file false complaints to have legitimate content removed; urgent timelines reduce the opportunity for meaningful review | Penalties for false or malicious complaints; requirement of a sworn declaration accompanying complaints about deepfake content |
| Resource disparity between large and small platforms | Major platforms have advanced detection systems and large compliance teams; smaller platforms do not | Tiered obligations proportionate to platform size and capacity; regulatory support for smaller platforms in building detection capability |
| Cross-border content | Deepfake content may be hosted on servers outside India, placing it beyond the immediate reach of domestic regulation | International regulatory cooperation; blocking orders for content hosted on non-compliant foreign platforms |
| Irreversibility of spread | Even where content is removed quickly, it may already have been downloaded, shared, and re-uploaded | Focus on speed of initial removal combined with obligations to prevent re-upload of identified deepfake content (see the sketch after this table) |
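
The final row of the table refers to preventing the re-upload of content that has already been taken down. A minimal sketch of one way a platform might do this appears below: a registry of hashes of removed files against which new uploads are checked. The design is hypothetical; production systems use perceptual hashes that survive re-encoding and cropping, whereas the exact-match SHA-256 used here catches only byte-identical copies.

```python
import hashlib

# Minimal, purely illustrative sketch of a re-upload blocklist. A real
# system would use a perceptual hash robust to re-encoding and cropping;
# exact-match SHA-256 only catches byte-identical copies.
class TakedownRegistry:
    def __init__(self) -> None:
        self._blocked: set[str] = set()

    @staticmethod
    def _digest(content: bytes) -> str:
        return hashlib.sha256(content).hexdigest()

    def register_removed(self, content: bytes) -> None:
        """Record the hash of content removed after a takedown decision."""
        self._blocked.add(self._digest(content))

    def is_reupload(self, content: bytes) -> bool:
        """Check an incoming upload against previously removed content."""
        return self._digest(content) in self._blocked

registry = TakedownRegistry()
registry.register_removed(b"<bytes of removed video>")
print(registry.is_reupload(b"<bytes of removed video>"))  # True
print(registry.is_reupload(b"<different upload>"))        # False
```

Because a single re-encode defeats exact matching, any statutory re-upload obligation would in practice require perceptual fingerprinting, which carries its own false-match risks.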

What a Coherent Deepfake Regulatory Framework for India Should Look Like

The absence of a dedicated deepfake statute in India is a significant gap in the legal framework. The following table sets out the key elements that a comprehensive deepfake regulatory framework should include, drawing on the constitutional requirements, the existing legal architecture, and the practical challenges identified above.

| Element | Description | Constitutional and Practical Basis |
| --- | --- | --- |
| Statutory definition of deepfakes | Clear legal definition distinguishing harmful deepfakes from legitimate uses including satire, parody, and artistic expression | Provides legal certainty; prevents over-application of regulatory provisions to protected expression |
| Tiered takedown timelines | Faster timelines for non-consensual explicit content; longer timelines with verification requirements for other categories | Proportionate approach consistent with the reasonableness requirement under Article 19(2) |
| Procedural safeguards | Mandatory notification to the content creator; opportunity to respond before removal is permanent; appeal mechanism | Satisfies natural justice requirements; prevents arbitrary censorship |
| Judicial or government oversight | Removal obligations triggered by court orders, government notifications, or a designated regulatory authority rather than private complaints alone | Consistent with Shreya Singhal principles; prevents private censorship |
| Detection technology investment | State and intermediary cooperation in developing and deploying accurate deepfake detection tools (see the sketch after this table) | Addresses the technical feasibility challenge; reduces the risk of false-positive removals |
| Platform accountability | Civil and criminal liability for platforms that fail to comply with removal obligations after acquiring actual knowledge | Strengthens enforcement; provides deterrence against non-compliance |
| Criminal liability for creators | Specific criminal offence for the creation and distribution of harmful deepfakes | Addresses the gap in the BNS 2023, which does not specifically criminalise deepfake creation |
| Public awareness programme | State-funded education initiative on deepfake identification and the risks of sharing unverified content | Addresses the demand side of the deepfake problem; reduces circulation through informed consumption |
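
The "detection technology investment" row above turns on a technical reality noted earlier: automated detectors produce both false positives and false negatives, and the confidence threshold a platform chooses trades one against the other. The toy sketch below illustrates that trade-off; the scores and labels are invented for illustration and carry no empirical weight.

```python
# Toy illustration of the false-positive/false-negative trade-off in
# automated deepfake detection. The detector scores below are invented;
# a real detector would be a trained model.
uploads = [
    ("clip_a", 0.97, True),   # (id, detector score, actually a deepfake?)
    ("clip_b", 0.80, True),
    ("clip_c", 0.65, False),  # authentic satire that happens to score high
    ("clip_d", 0.30, True),   # sophisticated deepfake that scores low
    ("clip_e", 0.10, False),
]

def evaluate(threshold: float) -> tuple[int, int]:
    """Count lawful clips wrongly flagged (FP) and deepfakes missed (FN)."""
    fp = sum(1 for _, score, fake in uploads if score >= threshold and not fake)
    fn = sum(1 for _, score, fake in uploads if score < threshold and fake)
    return fp, fn

for threshold in (0.5, 0.9):
    fp, fn = evaluate(threshold)
    print(f"threshold={threshold}: {fp} lawful clip(s) wrongly flagged, "
          f"{fn} deepfake(s) missed")
```

Raising the threshold from 0.5 to 0.9 eliminates the wrongly flagged lawful clip but doubles the number of deepfakes missed; this is precisely the tension that a rigid 3-hour deadline sharpens.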

Conclusion: India Needs a Dedicated Deepfake Law That Balances Protection with Constitutional Fidelity

Deepfake technology represents a qualitatively new form of digital harm that India's existing legal framework is not equipped to fully address. The Information Technology Act, the IT Rules of 2021, the Bharatiya Nyaya Sanhita, and the Digital Personal Data Protection Act each provide partial remedies, but none of them was designed with deepfakes in mind, and none of them addresses the specific combination of technical sophistication, rapid dissemination, and severe personal and societal harm that characterises deepfake content.

The 3-hour takedown rule reflects a genuine and urgent concern about the pace at which digital harm spreads. But urgent concern does not validate unconstitutional implementation. A takedown regime that bypasses judicial oversight, eliminates procedural safeguards, and compels platforms to act on private complaints without independent verification risks producing a system of private censorship that causes more harm to legitimate expression than it prevents from deepfake technology.

India needs a dedicated, comprehensive, and constitutionally grounded deepfake regulatory framework: one that defines the harm clearly, establishes proportionate and tiered responses, maintains procedural fairness for content creators, preserves the role of courts and government in authorising removal, and invests in the technical capacity necessary for accurate and efficient enforcement. The balance between privacy, dignity, and free expression is not an obstacle to deepfake regulation. It is the constitutional requirement that makes effective and legitimate regulation possible.

Frequently Asked Questions (FAQs) on Deepfake Regulation and the 3-Hour Takedown Rule in India

  1. What is a deepfake and why is it legally significant? A deepfake is synthetic audio-visual content, created using AI tools, that makes a real person appear to say or do something they never said or did. It is legally significant because it may constitute defamation, privacy violation, identity misuse, obscenity, or electoral interference depending on its content and distribution.


  2. Is there a specific law in India that addresses deepfakes? No. India currently has no dedicated deepfake legislation. Regulation is drawn from the IT Act 2000, IT Rules 2021, BNS 2023, and DPDP Act 2023, none of which specifically addresses deepfake technology or provides a comprehensive regulatory framework for it.


  3. What does Section 79 of the IT Act provide in relation to deepfakes? Section 79 provides a safe harbour to intermediaries from liability for third-party content, conditional on the intermediary not having actual knowledge of the unlawful content and taking expeditious action upon receiving such knowledge. It does not specify timelines for deepfake removal or define deepfake content.


  4. What is the 3-hour takedown rule and is it currently law in India? The 3-hour takedown rule is a proposed policy concept that would require platforms to remove harmful deepfake content within three hours of a complaint. It is not currently enacted in any Indian statute or regulation and its constitutional validity remains untested.


  5. Does the Shreya Singhal judgment affect the 3-hour takedown rule? Yes. The Supreme Court in Shreya Singhal v. Union of India held that intermediaries are required to act on court orders or government notifications rather than private complaints. A takedown regime based on private complaints without judicial or governmental oversight may not satisfy this standard.


  6. How does the right to privacy under Article 21 apply to deepfakes? The Supreme Court in K.S. Puttaswamy v. Union of India established privacy as a fundamental right. Deepfakes that misappropriate a person's identity directly violate this right, creating a constitutional obligation on the state to protect individuals against such harm through appropriate legislation and enforcement.


  7. What are the constitutional risks of a strict rapid takedown rule? A strict rapid takedown rule may create a chilling effect on freedom of expression under Article 19(1)(a), enable private censorship by shifting regulatory power from courts to platforms, compromise natural justice by removing content without notice or opportunity to respond, and bypass the judicial oversight framework established in Shreya Singhal.


  8. What should a comprehensive Indian deepfake law include? A comprehensive framework should include a statutory definition of deepfakes, tiered takedown timelines proportionate to the severity of harm, procedural safeguards including creator notification and appeal mechanisms, judicial or governmental oversight of removal decisions, investment in detection technology, criminal liability for deepfake creators, and public awareness programmes.


Key Takeaways: Everything You Must Know About Deepfake Regulation and the Legal Framework in India

Deepfakes are AI-generated audio-visual content that makes real people appear to say or do things they never did, causing personal harm through reputational damage and privacy violation, and societal harm through disinformation and electoral interference.

India has no dedicated deepfake legislation; existing regulation draws on Section 79 of the IT Act 2000, the IT Rules 2021, the BNS 2023, and the DPDP Act 2023, each of which provides only partial coverage.

The IT Rules 2021 impose a 24-hour removal obligation for explicit and morphed content under Rule 3(2), but deepfakes are not expressly defined or comprehensively covered in the Rules.

The 3-hour takedown rule is a proposed policy concept, not current law, designed to ensure rapid removal of harmful content before it goes viral; its constitutional validity has not been judicially determined.

The Shreya Singhal v. Union of India judgment requires that intermediary removal obligations be triggered by court orders or government notifications rather than private complaints, creating a constitutional constraint on rapid takedown regimes based on private complaint systems.

The right to privacy as a fundamental right under Article 21, established in K.S. Puttaswamy v. Union of India, creates a constitutional obligation on the state to protect citizens against identity misuse through deepfakes.

Practical challenges to effective deepfake regulation include detection accuracy limitations, inconsistency across platforms, misuse of complaint mechanisms, resource disparities between large and small platforms, and cross-border content hosting.

A proportionate regulatory framework requires a statutory definition of deepfakes, tiered removal timelines, procedural safeguards, judicial or government oversight, detection technology investment, and criminal liability for creators.

Any restriction on deepfake content must satisfy the reasonableness requirement under Article 19(2) and the proportionality standard established in K.S. Puttaswamy; overbroad regulation may itself violate constitutional rights.

India urgently needs a dedicated, comprehensive, and constitutionally grounded deepfake regulatory framework that balances the protection of personal dignity and privacy against the constitutional guarantee of freedom of expression.

References

The Information Technology Act, 2000: The primary legislation governing intermediary liability in India, containing Section 79 on safe harbour protection and the conditions under which it is lost when intermediaries fail to act on actual knowledge of unlawful content.

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021: The subordinate legislation imposing content moderation obligations on intermediaries, including the 24-hour removal requirement for specific categories of harmful content under Rule 3(2).

The Bharatiya Nyaya Sanhita, 2023: The successor to the Indian Penal Code, containing provisions on defamation and obscenity that are applicable to certain categories of deepfake content but do not specifically address deepfake technology.

The Digital Personal Data Protection Act, 2023: The legislation governing the processing of personal data in India, potentially applicable to the unauthorised use of a person's image or voice in deepfake creation but not specifically designed to address deepfake content misuse.

The Constitution of India, 1950: The foundational document containing Articles 19(1)(a) and 19(2) governing freedom of expression and its permissible restrictions, and Article 21 governing the right to life, personal liberty, privacy, and dignity.

Shreya Singhal v. Union of India, (2015) 5 SCC 1: The Supreme Court decision establishing that intermediaries must act on court orders or government notifications rather than private complaints, with direct implications for the constitutional validity of rapid complaint-based takedown regimes.

K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1: The nine-judge bench decision establishing privacy as a fundamental right under Article 21 and requiring that any interference with privacy satisfy the tests of legality, necessity, and proportionality.

Disclaimer

This article is published by CLEAR LAW (clearlaw.online) strictly for educational and informational purposes only. It does not constitute legal advice, legal opinion, or any form of professional counsel, and must not be relied upon as a substitute for consultation with a qualified legal practitioner. Nothing contained herein shall be construed as creating a lawyer-client relationship between the reader and the author, publisher, or CLEAR LAW (clearlaw.online).

All views, interpretations, and conclusions expressed in this article are solely those of the author and represent independent academic analysis. CLEAR LAW (clearlaw.online) does not endorse, verify, or guarantee the accuracy, completeness, or reliability of the content, and expressly disclaims any responsibility for the same.

While reasonable efforts are made to ensure that the information presented is accurate and up to date, no warranties or representations, express or implied, are made regarding its correctness, adequacy, or applicability to any specific factual or legal situation. Laws, regulations, and judicial interpretations are subject to change, and the content may not reflect the most current legal developments.

To the fullest extent permitted by applicable law, CLEAR LAW (clearlaw.online), the author, editors, and publisher disclaim all liability for any direct, indirect, incidental, consequential, or special damages arising out of or in connection with the use of, or reliance upon, this article.

Readers are strongly advised to seek independent legal advice from a qualified professional before making any decisions or taking any action based on the contents of this article. Reliance on any information provided in this article is strictly at the reader's own risk.

By accessing and using this article, the reader expressly agrees to the terms of this disclaimer.




When Artificial Intelligence Becomes a Weapon: Understanding Deepfakes and Why India's Legal System Is Not Yet Equipped to Stop Them

Think of deepfake technology as the digital age's most sophisticated instrument of deception. Unlike a doctored photograph that a trained eye might detect, or a fabricated quote that can be traced to its source, a deepfake video or audio clip is designed to be indistinguishable from reality. It places a real person's face on a body that is not theirs. It makes a real person appear to say words they never spoke. It creates a visual and audio record of events that never happened, and it does so with a realism that makes the content convincing to the vast majority of viewers who encounter it.

The consequences of this technology in the wrong hands are severe and multidimensional. At the personal level, a deepfake can destroy a reputation built over decades, infringe upon a person's most intimate privacy, and cause psychological harm that no court order can fully undo once the content has spread. At the societal level, deepfakes can distort political discourse, undermine democratic processes by fabricating footage of leaders and candidates, and corrode the basic trust that citizens place in visual and audio evidence.

India is not exempt from these harms. As one of the world's largest and fastest-growing digital societies, India faces the deepfake challenge at enormous scale, but without a dedicated legislative framework capable of addressing it. This article examines the current state of deepfake regulation in India, the existing legal provisions that partially address the problem, the proposed 3-hour takedown rule and its constitutional implications, and the framework of reform that India urgently needs.

What Deepfakes Actually Are: The Technology, Its Capabilities, and Why It Is Legally Distinct from Ordinary Misinformation

Deepfakes are artificial media created using artificial intelligence tools that analyse existing audio-visual data and generate modified content in which a person is depicted saying or doing something they never said or did. The term derives from the deep learning algorithms that power the technology. The result is content that is factually false but perceptually authentic.

The legal and social significance of deepfakes lies precisely in this authenticity. Traditional misinformation, whether in the form of a fabricated text report or a selectively edited photograph, can be identified, questioned, and refuted. The human mind applies natural scepticism to written claims. But visual and audio content is processed differently. People are conditioned to believe what they see and hear, and the deepfake exploits that conditioning with devastating precision.

The table below sets out the key distinctions between deepfakes and traditional misinformation and the legal implications of each distinction.

Feature

Traditional Misinformation

Deepfake Content

Nature of falsity

Text, edited images, or selectively framed facts

Fabricated audio-visual content depicting real people

Detectability by ordinary viewer

Relatively easier to identify and question

Designed to be indistinguishable from genuine content

Speed of harm

Harm accumulates over time as false information spreads

Harm is immediate and intensified by the perceived authenticity of the content

Reversibility

Corrections and retractions can partially address the harm

Once viral, deepfake content is virtually impossible to fully retract or correct

Legal characterisation

May constitute defamation, fraud, or electoral malpractice

May constitute defamation, privacy violation, identity misuse, obscenity, or electoral interference depending on content

Applicable legal framework

Multiple existing provisions address traditional misinformation

No dedicated Indian statute; existing provisions apply partially and imperfectly

At the personal level, deepfakes may take the form of non-consensual explicit content that misappropriates a person's identity for sexual purposes, defamatory content that falsely depicts an individual committing crimes or making damaging statements, and harassment material designed to humiliate or intimidate. At the societal level, deepfake footage of political leaders making inflammatory statements or engaging in discrediting conduct poses a direct threat to electoral integrity and democratic discourse.

The Current Legal Landscape: How Existing Indian Law Partially Addresses Deepfakes

India does not have a dedicated statute addressing deepfake technology. Regulation is instead drawn from a patchwork of existing legislation governing intermediary liability, criminal conduct, and data protection. While these provisions offer some remedies, they are collectively insufficient to address the specific and novel challenges that deepfakes present.

The table below provides an overview of the principal existing legal provisions applicable to deepfakes and their limitations.

Legal Provision

Relevance to Deepfakes

Key Limitation

Section 79, IT Act 2000 (Safe harbour for intermediaries)

Intermediaries lose immunity if they fail to act after obtaining actual knowledge of unlawful content

No specified timeframe for deepfake removal; no definition of deepfake content

Rule 3(1)(b), IT Rules 2021 (Prohibited content categories)

Platforms must prohibit defamatory, obscene, privacy-infringing, and impersonation content

Deepfakes not expressly mentioned; application depends on case-by-case characterisation

Rule 3(2), IT Rules 2021 (24-hour removal obligation)

Platforms must remove specific categories of content within 24 hours of complaint

Applies to nudity and explicit content; no explicit coverage of non-explicit deepfakes

BNS 2023 (Defamation and obscenity provisions)

Deepfakes that defame or involve explicit content may attract criminal liability

Reactive remedy only; does not enable rapid content removal or prevention

DPDP Act, 2023 (Personal data protection)

A person's image or voice may constitute personal data; unauthorised use in deepfakes may attract liability

Focused on data processing compliance rather than content misuse; deepfake-specific application not yet defined

Article 21 (Right to privacy as fundamental right)

State has constitutional obligation to protect citizens against identity misuse through deepfakes

Provides constitutional basis for regulation rather than an operational remedy

Section 79 of the IT Act and the Safe Harbour Framework: What It Provides and Where It Falls Short

Section 79 of the Information Technology Act, 2000 is the foundational provision governing intermediary liability in India. It provides that an intermediary shall not be liable for any third-party information, data, or communication link made available or hosted on its platform, provided that the intermediary does not initiate the transmission, does not select the receiver, and does not select or modify the information in the transmission. This safe harbour is conditional: if an intermediary receives actual knowledge that its platform is hosting unlawful content and fails to expeditiously remove it, the immunity is lost.

In the context of deepfakes, Section 79 becomes relevant when such content is hosted on social media platforms, video sharing services, or other digital intermediaries. However, the provision does not specify any timeframe within which removal must occur, does not define what constitutes actual knowledge in the context of deepfake-specific harms, and provides no guidance on the technical or procedural steps that platforms must take to identify deepfake content in the first instance.

The IT Rules, 2021 supplement Section 79 by introducing time-bound removal obligations. Rule 3(2) requires intermediaries to remove certain categories of content within 24 hours of receiving a complaint. These categories include content depicting nudity, sexual acts, and morphing of images. Deepfakes that fall within these categories may attract the 24-hour removal obligation. However, non-explicit deepfakes that cause severe reputational or political harm are not clearly covered, and the absence of a statutory definition of deepfake content means that classification disputes are inevitable.

The 3-Hour Takedown Rule: What It Proposes and Why Its Legal Status Remains Uncertain

The 3-hour takedown rule is a proposed policy intervention designed to ensure the rapid removal of harmful deepfake content before it can spread to a scale where the harm becomes irreversible. It is not currently codified in any Indian statute or regulation. Its proponents argue that the viral speed of digital content dissemination makes longer removal timelines inadequate: a deepfake video can be viewed by millions of people within hours of being uploaded, and a 24-hour or 48-hour removal obligation provides insufficient protection against this pace of harm.

The table below analyses the arguments for and against the 3-hour takedown rule from legal, practical, and constitutional perspectives.

Dimension

Arguments in Favour

Arguments Against

Harm prevention

Rapid removal contains the spread of deepfake content before it becomes viral; directly reduces the quantum of harm

Content cannot always be accurately identified as a deepfake within three hours; rapid removal of incorrectly identified content causes harm to legitimate creators

Constitutional validity

Restrictions on privacy-violating and defamatory content are permissible under Article 19(2); the state has a positive obligation to protect dignity under Article 21

May conflict with the principle in Shreya Singhal v. Union of India that intermediaries must act on court orders or government notifications rather than private complaints

Procedural fairness

Speed of action is essential where content causes immediate and severe harm

Content creators have no meaningful opportunity to be heard before their content is removed; natural justice principles are compromised

Platform feasibility

Major platforms have sufficient technical capacity to implement rapid removal mechanisms

Smaller platforms lack the detection tools and compliance resources to implement 3-hour removal reliably; creates unequal burden

Risk of censorship

Focused on genuinely harmful content such as explicit non-consensual material

Platforms may over-remove content to avoid liability; chilling effect on legitimate expression under Article 19(1)(a)

Misuse potential

Rapid response to verified harmful content

Bad actors can use complaint mechanisms to have legitimate content removed on a false deepfake characterisation

The Constitutional Framework: How Articles 19 and 21 Shape What Deepfake Regulation Can Legally Do

Any regulation of deepfake content in India must operate within the constitutional framework established by Part III of the Constitution. Two provisions are of central importance.

Article 19(1)(a) guarantees every citizen the right to freedom of speech and expression, including online expression. This right is not absolute. Article 19(2) permits the state to impose reasonable restrictions on this freedom in the interests of, among other things, the security of the state, public order, decency or morality, defamation, and contempt of court. Deepfake content that is defamatory, explicitly obscene, or likely to cause public disorder may legitimately be restricted under Article 19(2), provided the restriction is reasonable and proportionate.

Article 21 guarantees the right to life and personal liberty, which the Supreme Court has expansively interpreted to include the right to privacy, dignity, and personal reputation. The misappropriation of a person's identity through deepfake technology directly engages these rights. In K.S. Puttaswamy v. Union of India (2017), the Supreme Court held that privacy is a fundamental right and that the state has an obligation to protect individuals against privacy violations, including those arising from the misuse of personal data.

The table below sets out the constitutional framework applicable to deepfake regulation.

Constitutional Provision

Relevance to Deepfake Regulation

Key Principle

Article 19(1)(a)

Protects freedom of expression online, including satirical or artistic use of deepfake technology

Any restriction on deepfake content must be a reasonable restriction under Article 19(2) and must not suppress legitimate expression

Article 19(2)

Permits restrictions on speech that constitutes defamation, obscenity, or threat to public order

Provides constitutional basis for regulating harmful deepfakes that fall within these categories

Article 21

Protects privacy, dignity, and personal liberty against identity misuse through deepfakes

State has a positive obligation to protect citizens against deepfake-based privacy violations

Shreya Singhal v. Union of India (2015)

Established that intermediaries must act on court orders or government notifications, not private complaints

A 3-hour takedown regime based on private complaints may not satisfy this standard without adequate judicial or governmental oversight

K.S. Puttaswamy v. Union of India (2017)

Established privacy as a fundamental right; any interference must satisfy legality, necessity, and proportionality

Deepfake regulation must itself satisfy these tests; overbroad regulation may violate the right to expression

The Practical Challenges: Why Even a Well-Designed Takedown Rule Faces Serious Implementation Problems

Even setting aside the constitutional questions, a rapid deepfake takedown regime faces substantial practical challenges that must be addressed for any regulatory framework to be effective.

The table below sets out the principal practical challenges and potential approaches to addressing each.

Challenge

Nature of the Problem

Potential Approach

Detection accuracy

Deepfake identification is technically complex; automated systems produce false positives and false negatives; human verification is time-consuming

Investment in AI-assisted detection tools; requirement that platforms invest in detection capacity proportionate to their size and reach

Inconsistency across platforms

Different platforms have different content moderation policies and technical capabilities; without uniform standards, enforcement is uneven

Statutory minimum standards for deepfake detection and removal applicable to all intermediaries above a defined threshold

Misuse of complaint mechanisms

Bad actors can file false complaints to have legitimate content removed; urgent timelines reduce the opportunity for meaningful review

Penalties for false or malicious complaints; requirement of sworn declaration accompanying complaints about deepfake content

Resource disparity between large and small platforms

Major platforms have advanced detection systems and large compliance teams; smaller platforms do not

Tiered obligations proportionate to platform size and capacity; regulatory support for smaller platforms in building detection capability

Cross-border content

Deepfake content may be hosted on servers outside India, placing it beyond the immediate reach of domestic regulation

International regulatory cooperation; blocking orders for content hosted on non-compliant foreign platforms

Irreversibility of spread

Even where content is removed quickly, it may already have been downloaded, shared, and re-uploaded

Focus on speed of initial removal combined with obligations to prevent re-upload of identified deepfake content

What a Coherent Deepfake Regulatory Framework for India Should Look Like

The absence of a dedicated deepfake statute in India is a significant gap in the legal framework. The following table sets out the key elements that a comprehensive deepfake regulatory framework should include, drawing on the constitutional requirements, the existing legal architecture, and the practical challenges identified above.

Element

Description

Constitutional and Practical Basis

Statutory definition of deepfakes

Clear legal definition distinguishing harmful deepfakes from legitimate uses including satire, parody, and artistic expression

Provides legal certainty; prevents over-application of regulatory provisions to protected expression

Tiered takedown timelines

Faster timelines for non-consensual explicit content; longer timelines with verification requirements for other categories

Proportionate approach consistent with the reasonableness requirement under Article 19(2)

Procedural safeguards

Mandatory notification to content creator; opportunity to respond before removal is permanent; appeal mechanism

Satisfies natural justice requirements; prevents arbitrary censorship

Judicial or government oversight

Removal obligations triggered by court orders, government notifications, or a designated regulatory authority rather than private complaints alone

Consistent with Shreya Singhal principles; prevents private censorship

Detection technology investment

State and intermediary cooperation in developing and deploying accurate deepfake detection tools

Addresses the technical feasibility challenge; reduces risk of false positive removals

Platform accountability

Civil and criminal liability for platforms that fail to comply with removal obligations after acquiring actual knowledge

Strengthens enforcement; provides deterrence against non-compliance

Criminal liability for creators

Specific criminal offence for the creation and distribution of harmful deepfakes

Addresses the gap in the BNS 2023 which does not specifically criminalise deepfake creation

Public awareness programme

State-funded education initiative on deepfake identification and the risks of sharing unverified content

Addresses the demand side of the deepfake problem; reduces circulation through informed consumption

Conclusion: India Needs a Dedicated Deepfake Law That Balances Protection with Constitutional Fidelity

Deepfake technology represents a qualitatively new form of digital harm that India's existing legal framework is not equipped to fully address. The Information Technology Act, the IT Rules of 2021, the Bharatiya Nyaya Sanhita, and the Digital Personal Data Protection Act each provide partial remedies, but none of them was designed with deepfakes in mind, and none of them addresses the specific combination of technical sophistication, rapid dissemination, and severe personal and societal harm that characterises deepfake content.

The 3-hour takedown rule reflects a genuine and urgent concern about the pace at which digital harm spreads. But urgent concern does not validate unconstitutional implementation. A takedown regime that bypasses judicial oversight, eliminates procedural safeguards, and compels platforms to act on private complaints without independent verification risks producing a system of private censorship that causes more harm to legitimate expression than it prevents from deepfake technology.

India needs a dedicated, comprehensive, and constitutionally grounded deepfake regulatory framework: one that defines the harm clearly, establishes proportionate and tiered responses, maintains procedural fairness for content creators, preserves the role of courts and government in authorising removal, and invests in the technical capacity necessary for accurate and efficient enforcement. The balance between privacy, dignity, and free expression is not an obstacle to deepfake regulation. It is the constitutional requirement that makes effective and legitimate regulation possible.

Frequently Asked Questions (FAQs) on Deepfake Regulation and the 3-Hour Takedown Rule in India

  1. What is a deepfake and why is it legally significant? A deepfake is artificial audio-visual content created using AI tools that make a real person appear to say or do something they never said or did. It is legally significant because it may constitute defamation, privacy violation, identity misuse, obscenity, or electoral interference depending on its content and distribution.


  2. Is there a specific law in India that addresses deepfakes? No. India currently has no dedicated deepfake legislation. Regulation is drawn from the IT Act 2000, IT Rules 2021, BNS 2023, and DPDP Act 2023, none of which specifically addresses deepfake technology or provides a comprehensive regulatory framework for it.


  3. What does Section 79 of the IT Act provide in relation to deepfakes? Section 79 provides a safe harbour to intermediaries from liability for third-party content, conditional on the intermediary not having actual knowledge of the unlawful content and taking expeditious action upon receiving such knowledge. It does not specify timelines for deepfake removal or define deepfake content.


  4. What is the 3-hour takedown rule and is it currently law in India? The 3-hour takedown rule is a proposed policy concept that would require platforms to remove harmful deepfake content within three hours of a complaint. It is not currently enacted in any Indian statute or regulation and its constitutional validity remains untested.


  5. Does the Shreya Singhal judgment affect the 3-hour takedown rule? Yes. The Supreme Court in Shreya Singhal v. Union of India held that intermediaries are required to act on court orders or government notifications rather than private complaints. A takedown regime based on private complaints without judicial or governmental oversight may not satisfy this standard.


  6. How does the right to privacy under Article 21 apply to deepfakes? The Supreme Court in K.S. Puttaswamy v. Union of India established privacy as a fundamental right. Deepfakes that misappropriate a person's identity directly violate this right, creating a constitutional obligation on the state to protect individuals against such harm through appropriate legislation and enforcement.


  7. What are the constitutional risks of a strict rapid takedown rule? A strict rapid takedown rule may create a chilling effect on freedom of expression under Article 19(1)(a), enable private censorship by shifting regulatory power from courts to platforms, compromise natural justice by removing content without notice or opportunity to respond, and bypass the judicial oversight framework established in Shreya Singhal.


  8. What should a comprehensive Indian deepfake law include? A comprehensive framework should include a statutory definition of deepfakes, tiered takedown timelines proportionate to the severity of harm, procedural safeguards including creator notification and appeal mechanisms, judicial or governmental oversight of removal decisions, investment in detection technology, criminal liability for deepfake creators, and public awareness programmes.


Key Takeaways: Everything You Must Know About Deepfake Regulation and the Legal Framework in India

Deepfakes are AI-generated audio-visual content that make real people appear to say or do things they never did, causing personal harm through reputation damage and privacy violation and societal harm through disinformation and electoral interference.

India has no dedicated deepfake legislation; existing regulation draws on Section 79 of the IT Act 2000, the IT Rules 2021, the BNS 2023, and the DPDP Act 2023, each of which provides only partial coverage.

The IT Rules 2021 impose a 24-hour removal obligation for explicit and morphed content under Rule 3(2), but deepfakes are not expressly defined or comprehensively covered in the Rules.

The 3-hour takedown rule is a proposed policy concept, not current law, designed to ensure rapid removal of harmful content before it goes viral; its constitutional validity has not been judicially determined.

The Shreya Singhal v. Union of India judgment requires that intermediary removal obligations be triggered by court orders or government notifications rather than private complaints, creating a constitutional constraint on rapid takedown regimes based on private complaint systems.

The right to privacy as a fundamental right under Article 21, established in K.S. Puttaswamy v. Union of India, creates a constitutional obligation on the state to protect citizens against identity misuse through deepfakes.

Practical challenges to effective deepfake regulation include detection accuracy limitations, inconsistency across platforms, misuse of complaint mechanisms, resource disparities between large and small platforms, and cross-border content hosting.

A proportionate regulatory framework requires a statutory definition of deepfakes, tiered removal timelines, procedural safeguards, judicial or government oversight, detection technology investment, and criminal liability for creators.

Any restriction on deepfake content must satisfy the reasonableness requirement under Article 19(2) and the proportionality standard established in K.S. Puttaswamy; overbroad regulation may itself violate constitutional rights.

India urgently needs a dedicated, comprehensive, and constitutionally grounded deepfake regulatory framework that balances the protection of personal dignity and privacy against the constitutional guarantee of freedom of expression.

References

The Information Technology Act, 2000: The primary legislation governing intermediary liability in India, containing Section 79 on safe harbour protection and the conditions under which it is lost when intermediaries fail to act on actual knowledge of unlawful content.

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021: The subordinate legislation imposing content moderation obligations on intermediaries, including the 24-hour removal requirement for specific categories of harmful content under Rule 3(2).

The Bharatiya Nyaya Sanhita, 2023: The successor to the Indian Penal Code, containing provisions on defamation and obscenity that are applicable to certain categories of deepfake content but do not specifically address deepfake technology.

The Digital Personal Data Protection Act, 2023: The legislation governing the processing of personal data in India, potentially applicable to the unauthorised use of a person's image or voice in deepfake creation but not specifically designed to address deepfake content misuse.

The Constitution of India, 1950: The foundational document containing Articles 19(1)(a) and 19(2) governing freedom of expression and its permissible restrictions, and Article 21 governing the right to life, personal liberty, privacy, and dignity.

Shreya Singhal v. Union of India, (2015) 5 SCC 1: The Supreme Court decision establishing that intermediaries must act on court orders or government notifications rather than private complaints, with direct implications for the constitutional validity of rapid complaint-based takedown regimes.

K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1: The nine-judge bench decision establishing privacy as a fundamental right under Article 21 and requiring that any interference with privacy satisfy the tests of legality, necessity, and proportionality.

Disclaimer

This article is published by CLEAR LAW (clearlaw.online) strictly for educational and informational purposes only. It does not constitute legal advice, legal opinion, or any form of professional counsel, and must not be relied upon as a substitute for consultation with a qualified legal practitioner. Nothing contained herein shall be construed as creating a lawyer-client relationship between the reader and the author, publisher, or CLEAR LAW (clearlaw.online).

All views, interpretations, and conclusions expressed in this article are solely those of the author and represent independent academic analysis. CLEAR LAW (clearlaw.online) does not endorse, verify, or guarantee the accuracy, completeness, or reliability of the content, and expressly disclaims any responsibility for the same.

While reasonable efforts are made to ensure that the information presented is accurate and up to date, no warranties or representations, express or implied, are made regarding its correctness, adequacy, or applicability to any specific factual or legal situation. Laws, regulations, and judicial interpretations are subject to change, and the content may not reflect the most current legal developments.

To the fullest extent permitted by applicable law, CLEAR LAW (clearlaw.online), the author, editors, and publisher disclaim all liability for any direct, indirect, incidental, consequential, or special damages arising out of or in connection with the use of, or reliance upon, this article.

Readers are strongly advised to seek independent legal advice from a qualified professional before making any decisions or taking any action based on the contents of this article. Reliance on any information provided in this article is strictly at the reader's own risk.

By accessing and using this article, the reader expressly agrees to the terms of this disclaimer.



