
The Algorithmic Accountability: A Critical Analysis of India's DPDP Act 2023 and the Emerging AI Governance Framework in 2025-26 | Data Privacy, Generative AI, and the Legal Frontiers India Must Now Confront

When the Algorithm Knows More About You Than Your Government Does: Understanding Why India's DPDP Act 2023 Is the Most Important Law You Have Never Read

Think of your personal data as a shadow. It follows you everywhere: every search you conduct, every purchase you make, every form you fill, every prompt you type into an AI system. Unlike a physical shadow, however, this digital one does not disappear when the light changes. It accumulates, is stored, is processed, is sold, and is fed into machine learning models that use it to predict, profile, and in some cases make consequential decisions about your life. Who governs this shadow? Until recently, in India, almost nobody did.

The notification of the Digital Personal Data Protection Act, 2023 and the subsequent implementation of the DPDP Rules in early 2025 represent India's most serious legislative attempt to answer that question. Together, they mark the transition from what legal scholars have described as a decade of data-privacy vacuum into a structured statutory regime that one might justifiably call India's Cyber Constitution.

But this transition coincides with its greatest challenge. The same years that saw the DPDP Act's passage have also witnessed the most explosive growth of Generative Artificial Intelligence in history. Systems that train on billions of data points, generate content indistinguishable from human output, and make autonomous decisions affecting individual lives are now embedded in the commercial and governmental fabric of India. The question this article sets out to examine is whether the DPDP Act's framework is adequate to hold these invisible algorithms accountable to the visible law of the land.

The Constitutional Foundation: Why Puttaswamy Is the Bedrock of Everything That Follows

No analysis of the DPDP Act, 2023 can begin without acknowledging the constitutional foundation on which it rests. In Justice K.S. Puttaswamy (Retd.) v. Union of India (2017) 10 SCC 1, a nine-judge constitutional bench of the Supreme Court unanimously held that the right to privacy is a fundamental right under Article 21 of the Constitution of India. The judgment recognised, with particular prescience, that informational privacy, the right of individuals to control information about themselves, is a core dimension of this fundamental right.

The Puttaswamy judgment did not merely establish a constitutional right in the abstract. It established a framework within which any interference with that right must be assessed. Interference with privacy must satisfy three tests: it must be grounded in law, it must pursue a legitimate aim, and it must be proportionate to that aim. Every provision of the DPDP Act, every Significant Data Fiduciary (SDF) guideline issued by the Ministry of Electronics and Information Technology (MeitY), and every order of the Data Protection Board must be measured against these standards.

The significance of this constitutional foundation cannot be overstated. Unlike data protection frameworks in jurisdictions where privacy is a statutory right subject to legislative revision, India's DPDP Act rests on a fundamental right that cannot be abridged by ordinary legislation. This creates both a floor of protection that the Act must meet and a ceiling of interference that it must not breach.

The table below sets out the constitutional architecture within which the DPDP Act operates.

| Constitutional Element | Content | Implication for DPDP Act |
|---|---|---|
| Article 21 (Right to Life and Personal Liberty) | Includes informational privacy as recognised in Puttaswamy | The DPDP Act must meet constitutional standards; it cannot authorise interference with privacy that fails the Puttaswamy proportionality test |
| Puttaswamy Proportionality Test | Interference with privacy must be legal, necessary, and proportionate | State exemptions under the DPDP Act must satisfy this test; overbroad exemptions are constitutionally vulnerable |
| Article 14 (Equality) | Equal protection of all persons before the law | DPDP protections must be available equally to all individuals regardless of their digital literacy or economic status |
| Article 19(1)(a) (Freedom of Expression) | Includes the right to receive and impart information | Tension between data protection and the free flow of information must be resolved through proportionality |

The Legal Architecture of the DPDP Act 2023: What the Framework Actually Provides

The Digital Personal Data Protection Act, 2023, unlike its predecessor the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, introduces a conceptually coherent and contemporary framework for data governance. The 2011 Rules were primarily cybersecurity instruments drafted before the era of platform-scale data collection. The DPDP Act is specifically designed for a world in which personal data is the primary raw material of commercial and governmental activity.

The table below sets out the key structural elements of the DPDP Act and their significance.

| Element | Provision | Content | Significance |
|---|---|---|---|
| Data Fiduciary | Definition and Sections 5 to 10 | Any entity that determines the purpose and means of processing personal data | Establishes the primary locus of accountability; AI companies processing Indian citizens' data fall squarely within this definition |
| Consent framework | Sections 5 and 6 | Consent must be free, specific, informed, and unconditional; Notice must be provided | Creates the core legal basis for data processing; challenges AI's mass data collection model |
| Data Principal rights | Sections 11 to 14 | Rights of access, correction, erasure, and grievance redressal | Empowers individuals to exercise control over their data; creates the legal basis for Machine Unlearning demands |
| Right to Erasure | Section 12 | Data Fiduciary must erase personal data on request | Creates the most technically challenging obligation for AI systems |
| Significant Data Fiduciaries | Section 10 and DPDP Rules 2025 | SDFs subject to enhanced obligations including DPIA, data protection officer appointment, and audits | Creates a risk-proportionate compliance tier for the largest and most consequential data processors |
| Data Protection Board | Section 18 onwards | Specialised regulatory and adjudicatory body for data disputes | Provides faster and more expert dispute resolution than traditional civil courts |
| Penalties | Section 33 | Up to Rs 250 crores for significant breaches | Creates meaningful financial deterrence for non-compliance |
| Accountability | Section 8 | Data Fiduciary must ensure accuracy of personal data | Shifts from no-fault to strict accountability; AI hallucination directly engages this provision |

Data Fiduciaries in the Age of AI: When OpenAI and Google Are Processing Indian Citizens' Data

The concept of the Data Fiduciary is the most consequential structural element of the DPDP Act for the AI governance debate. Any entity that determines the purpose and means of processing personal data is a Data Fiduciary, and any entity that processes data on the instructions of a Fiduciary is a Data Processor. The DPDP Rules, 2025, have clarified that the term processing includes training, fine-tuning, and the prompt-response cycle of AI systems.

This clarification is significant in its practical implications. When OpenAI trains GPT models on data that includes content generated by Indian citizens, it is processing personal data. When Google processes Indian users' search queries, location data, and behavioural patterns through its AI systems, it is processing personal data. When a domestic Indian AI startup fine-tunes a model on scraped public data from Indian websites, it is processing personal data. All of these entities are Data Fiduciaries under the DPDP Act and are subject to its obligations.

Whether extraterritorial application of the Act to foreign AI companies is legally and practically enforceable remains one of the most pressing unresolved questions in Indian data law. The Act's provisions extend to processing of personal data of Indian data principals regardless of where the processing takes place, but the mechanisms for enforcement against non-Indian entities remain to be tested in regulatory and judicial practice.

The table below illustrates the Data Fiduciary obligations as they apply to different categories of AI actors.

| AI Actor | Data Fiduciary Status | Processing Activities Covered | Key Compliance Obligations |
|---|---|---|---|
| Large foreign AI platforms (OpenAI, Google, Meta) | Data Fiduciary for Indian data principals | Training, fine-tuning, inference, prompt-response cycles | Consent, Notice, Data Principal rights, potential SDF designation |
| Indian AI startups using scraped data | Data Fiduciary | Training on public and scraped data; model deployment | Same obligations as above; enforcement more immediately practicable |
| Indian businesses deploying third-party AI | Data Fiduciary for their users' data; Data Processor relationship with AI provider | User interaction data processed through AI systems | Accountability for AI provider's processing of user data |
| Government agencies using AI | Data Fiduciary with potential statutory exemptions | Processing of citizen data through AI systems | Subject to Act with potentially broad exemptions; constitutionality of exemptions subject to Puttaswamy test |

Consent, Notice, and the AI Scraping Problem: Is Most AI Training Legally Untenable Under Indian Law?

Sections 5 and 6 of the DPDP Act establish the consent framework at the heart of the legislation. Consent must be free, specific, informed, and unconditional. Before obtaining consent, the Data Fiduciary must provide a Notice to the Data Principal that clearly specifies what data is being collected, the purpose for which it will be processed, and how the Principal may exercise their rights.
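The statute prescribes no data format for consent, but the Section 5 and 6 requirements can be illustrated as a validation check a Data Fiduciary might run before processing. This is a minimal sketch: the `ConsentRecord` structure and its field names are illustrative assumptions, not drawn from the Act or the Rules.

```python
from dataclasses import dataclass

# Illustrative sketch only: the DPDP Act does not mandate this schema.
# Field names below are assumptions chosen to mirror the Section 5/6
# requirements (free, specific, informed, unconditional consent; Notice).

@dataclass
class ConsentRecord:
    data_principal_id: str
    purpose: str                  # the specific purpose consented to
    notice_served: bool           # Section 6: Notice must precede consent
    freely_given: bool            # consent must be free
    informed: bool                # consent must be informed
    conditional_on_service: bool  # consent must be unconditional

def may_process(record: ConsentRecord, intended_purpose: str) -> bool:
    """Check a consent record against Section 5/6 style requirements.

    Purpose limitation: consent obtained for one purpose (e.g. site
    analytics) does not cover a different one (e.g. model training).
    """
    return (
        record.notice_served
        and record.freely_given
        and record.informed
        and not record.conditional_on_service
        and record.purpose == intended_purpose
    )

# A consent given for analytics does not license AI model training.
analytics_consent = ConsentRecord(
    data_principal_id="dp-001",
    purpose="site analytics",
    notice_served=True,
    freely_given=True,
    informed=True,
    conditional_on_service=False,
)
print(may_process(analytics_consent, "site analytics"))  # True
print(may_process(analytics_consent, "model training"))  # False
```

The purpose-matching check in the last line is precisely the point at which generic terms-of-service consent fails: it was never given for the specific purpose of model training.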

This framework creates an immediate and serious challenge for the AI training model as currently practised. Generative AI systems are trained on massive datasets that may contain billions of individual data points, sourced from web scraping, public databases, licensed datasets, and user interactions. The individuals whose data appears in these datasets have in almost no case provided the specific, informed consent that the DPDP Act requires. They have not received a Notice from the AI company specifying that their personal data will be used to train a large language model. They have not been given an opportunity to withhold consent or to exercise their rights before the processing took place.

This analytical gap is not a marginal compliance concern. It suggests that much of the AI training currently conducted on Indian citizens' data is operating on legally uncertain ground under the DPDP Act's consent framework. The Act does provide for legitimate uses other than consent, including processing for legal obligations, safety emergencies, and legitimate state functions, but it is not clear that commercial AI training falls within any of these categories.

The table below analyses the consent framework challenges specific to AI training.

| Challenge | Description | Legal Implication Under DPDP Act |
|---|---|---|
| Mass scraping without Notice | AI systems train on billions of scraped data points without providing individual notices to data subjects | Potential violation of Section 6 Notice requirement for every individual whose personal data was scraped |
| Absence of specific consent | Consent for AI training is not obtained as a separate, specific act; web browsing terms of service do not constitute specific informed consent for model training | Sections 5 and 6 require consent to be specific to the purpose; general terms of service consent is insufficient |
| Retroactive consent impossibility | For already-trained models, the consent that should have been obtained before training cannot practically be obtained after the fact | Creates an ongoing compliance gap for existing models; potentially requires remediation through alternative legal basis |
| Purpose limitation | Data collected for one purpose cannot be used for another without fresh consent | Training AI models on data collected for other purposes requires a new legal basis or fresh consent |
| Cross-border processing | Data of Indian principals processed abroad may escape immediate enforcement | Creates regulatory gap that requires international regulatory cooperation to address |

AI Governance and the Significant Data Fiduciary Framework: The 2025-26 Regulatory Developments

As of early 2026, MeitY has introduced specific guidelines for Significant Data Fiduciaries, a designation that applies to entities whose scale, sensitivity of data processing, or potential impact on individuals and national security places them in a category requiring enhanced regulatory oversight. The SDF framework represents the most important recent development in the operationalisation of the DPDP Act for AI governance.

Data Protection Impact Assessments: The DPIA as a Governance Tool

SDFs are required to conduct Data Protection Impact Assessments before deploying new data processing activities, including AI systems. A DPIA is not merely a compliance formality. It is a substantive analytical exercise that requires the Data Fiduciary to identify the privacy risks of a proposed processing activity, assess whether those risks are justified by the purpose of the activity, and implement measures to mitigate them.

For AI systems specifically, the DPIA framework creates an obligation to analyse algorithmic bias, the risk that an AI model may discriminate against individuals based on protected data categories. An AI hiring system that discriminates against women, or a credit scoring model that disadvantages members of particular communities, creates risks that a properly conducted DPIA should identify before the system is deployed. The DPIA requirement therefore functions as a pre-deployment safeguard against harms that might otherwise only be discovered after they have affected thousands or millions of individuals.
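Neither the Act nor the Rules prescribes a bias metric, but a DPIA that probes algorithmic bias could record a quantified check of the kind sketched below. The four-fifths (0.8) threshold is a convention borrowed from US employment-selection practice and is used here purely as an illustrative assumption, not as an Indian legal standard.

```python
# Illustrative DPIA-style bias probe. The DPDP framework prescribes no
# metric; the 0.8 ("four-fifths") threshold is a US employment-selection
# convention, used here only as an example of a check a DPIA might record.

def selection_rate(outcomes: list[int]) -> float:
    """Fraction of positive outcomes (1 = selected/approved)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower group selection rate to the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical hiring-model outputs for two groups (1 = shortlisted).
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # selection rate 0.75
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # selection rate 0.375

ratio = disparate_impact_ratio(group_a, group_b)
print(f"{ratio:.2f}")   # 0.50 — well below the illustrative 0.8 threshold
flagged = ratio < 0.8   # a DPIA would record this as a risk to mitigate
```

A pre-deployment DPIA that ran such a probe on the hypothetical hiring model above would flag the disparity before the system affected any applicant, which is exactly the safeguard role the DPIA requirement is meant to play.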

The Right to Erasure and the Machine Unlearning Problem

Section 12 of the DPDP Act grants every Data Principal the right to request erasure of their personal data. This right is conceptually simple but technically formidable when applied to AI systems. Once a data point has been incorporated into a trained AI model, removing the influence of that specific data point from the model's parameters, a process researchers call Machine Unlearning, is extremely difficult and in most cases practically impossible with current technology.

This creates one of the sharpest tensions in the DPDP Act's application to AI. If an Indian citizen invokes their right to erasure under Section 12 and the Data Fiduciary cannot remove the citizen's data from its AI model because Machine Unlearning is technically infeasible with the model's architecture, the Fiduciary is in potential violation of the Act, with penalties of up to Rs 250 crores for significant breaches.

The legal implication is clear and the challenge it creates is genuinely novel. Unlike conventional databases from which a record can simply be deleted, AI models do not store data as retrievable records. The training data is transformed into model weights and parameters that represent the statistical patterns in the original data rather than the data itself. Complying with erasure requests therefore requires either significant advances in Machine Unlearning technology, architectural decisions at the model design stage that build in erasure capacity, or a judicial or legislative determination that the Right to Erasure under Section 12 applies differently to AI models than to conventional databases.
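One of the "architectural decisions at the model design stage" mentioned above is sharded training of the kind proposed in the machine-unlearning literature (the SISA approach): train separate sub-models on disjoint data shards so that an erasure request requires retraining only the shard that contained the requester's data. The toy "model" below is just a per-shard mean standing in for real training; it is a sketch of the idea, not a production technique.

```python
# Toy sketch of SISA-style sharded training for erasure support.
# "Training" here is computing a shard mean; in a real system each shard
# would train a sub-model and predictions would be aggregated across shards.

from statistics import mean

def train_shards(data: dict[str, float], n_shards: int = 2):
    """Partition records across shards and 'train' (mean) each shard."""
    shards = [dict() for _ in range(n_shards)]
    for i, (record_id, value) in enumerate(sorted(data.items())):
        shards[i % n_shards][record_id] = value
    models = [mean(s.values()) for s in shards]
    return shards, models

def erase(shards, models, record_id: str) -> int:
    """Honour an erasure request by retraining only the affected shard."""
    for idx, shard in enumerate(shards):
        if record_id in shard:
            del shard[record_id]
            models[idx] = mean(shard.values())  # retrain one shard only
            return idx
    raise KeyError(record_id)

data = {"dp-001": 10.0, "dp-002": 20.0, "dp-003": 30.0, "dp-004": 40.0}
shards, models = train_shards(data)
erased_shard = erase(shards, models, "dp-003")
# Only one shard's model changed; the requester's data no longer
# influences any model parameter.
assert all("dp-003" not in s for s in shards)
```

The design trade-off is paid up front: sharded ensembles typically cost some accuracy and training efficiency in exchange for making Section 12 compliance tractable, which is why erasure capacity is an architectural choice rather than a retrofit.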

The Black Box Problem: Accountability, Transparency, and AI Hallucination Under the DPDP Act

One of the most intellectually challenging dimensions of AI governance under the DPDP Act is what technologists call the Black Box problem. Modern large language models and deep neural networks are extraordinarily complex systems whose internal reasoning processes cannot be fully understood or explained even by their developers. When such a system makes a decision about an individual, whether recommending a loan approval, generating a profile, or flagging a communication for review, neither the developer nor the regulator can provide a complete explanation of how that decision was reached.

Transparency is a key requirement of the DPDP Act. The Notice and consent framework requires Data Fiduciaries to clearly explain the purpose of processing to Data Principals. The accountability framework requires Fiduciaries to be able to demonstrate compliance with the Act's requirements. Both of these requirements are in tension with the fundamental opacity of Black Box AI systems.

Section 8 of the Act places a direct duty on Data Fiduciaries to ensure the accuracy of personal data they process. This provision engages directly with the phenomenon of AI hallucination, the tendency of large language models to generate false, fabricated, or misleading information with apparent confidence. When an AI system generates a false profile of an individual, creates inaccurate financial assessments, or produces fabricated personal data, the Data Fiduciary is directly liable under Section 8. This shift from the no-fault environment that characterised India's previous IT framework to the strict accountability standard of the DPDP Act is perhaps the most significant development in Indian tech law in a generation.
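The Act does not prescribe how accuracy is to be assured. One mitigation pattern a Fiduciary might adopt is to verify any personal-data fields an AI system emits against an authoritative record before release; the `verified_records` store and field names below are illustrative assumptions, not a statutory mechanism.

```python
# Illustrative accuracy gate for AI-generated personal data. Section 8
# places the accuracy duty on the Data Fiduciary; checking generated
# fields against a source of truth before release is one possible
# mitigation. The record store and field names here are assumptions.

verified_records = {
    "dp-001": {"name": "A. Sharma", "city": "Pune"},
}

def accuracy_gate(principal_id: str, generated: dict[str, str]) -> dict[str, str]:
    """Return only the generated fields that match the verified record.

    Fields the model fabricated (hallucinated) or got wrong are withheld
    rather than released, since the Fiduciary bears accuracy liability.
    """
    truth = verified_records.get(principal_id, {})
    return {k: v for k, v in generated.items() if truth.get(k) == v}

# A model output with one accurate field and one hallucinated field.
model_output = {"name": "A. Sharma", "city": "Mumbai"}
released = accuracy_gate("dp-001", model_output)
print(released)  # {'name': 'A. Sharma'} — the inaccurate field is withheld
```

A gate of this kind only works where an authoritative record exists; for free-form generated text about individuals, no such source of truth is available, which is precisely why hallucination tests Section 8 to its limits.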

The table below illustrates how the Black Box problem engages specific DPDP Act provisions.

| AI Characteristic | DPDP Act Requirement | Tension |
|---|---|---|
| Opacity of decision-making | Transparency in Notice; ability to explain processing to Data Principals | Data Fiduciary cannot explain what it cannot itself understand |
| AI hallucination | Section 8 duty to ensure accuracy of personal data | Fiduciary is strictly liable for false data generated by its AI system |
| Automated profiling | Right of Data Principal to know about automated processing | Profiling through AI may occur without the Data Principal's knowledge |
| Algorithmic bias | DPIA requirement for SDFs to identify discrimination risks | Bias may be emergent from training data rather than intentional design |
| Inability to erase data from trained models | Section 12 Right to Erasure | Technical impossibility of Machine Unlearning creates compliance gap |

India Versus the EU AI Act: Two Philosophies of Technology Regulation

The contrast between India's data-centric approach under the DPDP Act and the European Union's technology-centric approach under the EU AI Act reflects a fundamental difference in regulatory philosophy that will shape the global governance of AI for years to come.

The EU AI Act, implemented in 2024-25, regulates AI systems themselves on the basis of their risk level. High-risk AI applications, those affecting employment, credit, law enforcement, and critical infrastructure, are subject to stringent requirements including transparency obligations, human oversight, and conformity assessments. Prohibited AI practices, including social scoring by governments and real-time biometric surveillance in public spaces, are banned outright. The Act's target is the technology and its use, not the data that feeds it.

India's DPDP Act takes a different starting point. Its primary focus is the sovereignty of the individual over their personal data. It regulates what can be done with personal data, who can do it, and on what legal basis, rather than regulating the AI systems that process that data. This approach is more flexible for innovation: an AI company can deploy almost any system it chooses as long as it processes personal data in compliance with the consent, Notice, and accountability framework of the Act.

The table below compares the two approaches and their implications.

| Dimension | India DPDP Act 2023 | EU AI Act 2024-25 |
|---|---|---|
| Regulatory target | Personal data and its processing | AI systems and their deployment |
| Primary mechanism | Consent, Notice, Data Principal rights, accountability | Risk classification, conformity assessment, human oversight |
| Innovation flexibility | Higher; less direct regulation of AI systems themselves | Lower; high-risk AI faces significant pre-deployment requirements |
| Individual empowerment | High in theory; burden of proof on individual during grievances | High for prohibited practices; systematic protections for high-risk AI |
| Accountability model | Data Fiduciary accountability for processing outcomes | Developer accountability for system design and deployment |
| Gap for AI governance | Does not directly regulate AI systems; Black Box problem unaddressed | Addresses AI systems directly but may create compliance barriers for smaller innovators |
| Future convergence potential | India may need dedicated AI Governance Act to complement DPDP | EU approach may need data governance complement for coherent framework |

India's approach is more conducive to the domestic AI startup ecosystem and avoids the prescriptive technology mandates that some argue chill innovation. However, it places a higher burden on individual Data Principals who must initiate grievance proceedings to enforce their rights, and it does not directly address the systemic risks of AI deployment in high-stakes domains that the EU AI Act specifically targets.

The Data Protection Board: A New Institutional Architecture for the Digital Age

The establishment of the Data Protection Board of India in late 2025 represents a crucial institutional development that complements the DPDP Act's substantive framework. The DPB provides a specialised forum for adjudicating digital data disputes, bypassing the delays and technical unfamiliarity of traditional civil courts.

The significance of the DPB for AI governance cannot be overstated. The cases it will face in 2026 and beyond will be among the most technically complex in Indian legal history: cases involving AI data scraping, Machine Unlearning demands, algorithmic discrimination, and the accountability of black box systems for data inaccuracies. These are not cases that a general civil court, however distinguished its bench, is well equipped to handle. The DPB, if properly constituted with members who combine legal expertise with technical competence, has the potential to develop a body of AI-specific data protection jurisprudence that will guide both enforcement and industry practice for decades.

The DPB's first major enforcement tests are expected to involve AI data scraping by both domestic and foreign platforms. The outcomes of these early cases will be defining: they will establish whether the DPDP Act's consent and accountability framework is a real constraint on AI companies or a paper tiger, and they will determine India's position in the global regulatory landscape as a serious data protection jurisdiction or one that prioritises innovation over individual rights.

Conclusion: The Invisible Algorithm Must Answer to the Visible Law

The DPDP Act 2023, implemented through the Rules of 2025 and operationalised through the Data Protection Board established in late 2025, is a genuine and important achievement in Indian data governance. It rests on the strongest possible constitutional foundation in the Puttaswamy judgment, introduces contemporary accountability standards, and creates institutional infrastructure capable of meaningful enforcement.

But it is not sufficient for the AI challenge it now faces. The consent and Notice framework, designed for identifiable and deliberate data collection, struggles to address the mass scraping and training processes of Generative AI. The Right to Erasure creates obligations that current AI technology cannot technically satisfy. The accountability framework for data accuracy is tested to its limits by the hallucination problem inherent in large language models. And the Black Box nature of modern AI creates transparency demands that the systems themselves cannot currently meet.

Three developments are certain as we look toward the rest of 2026 and beyond. The Data Protection Board will face its first major tests in cases involving AI data scraping and these cases will define the practical scope of the DPDP Act's consent requirements. Machine Unlearning will become the most significant area of legal-technical research in Indian data law, as the Right to Erasure encounters the realities of trained AI systems. And India will likely need a dedicated AI Governance Act, analogous in ambition if not identical in design to the EU AI Act, to complement the DPDP Act's data-centric framework with direct regulation of AI systems and their deployment.

The goal of Indian law is to create a Digital Nagrik, a digital citizen who is empowered, informed, and protected. The success of this vision depends on how effectively the legal system can hold the invisible algorithms of the data economy accountable to the visible law of the land. That accountability is not yet fully established. Building it is the defining task of Indian data law in the years ahead.

Frequently Asked Questions (FAQs) on the DPDP Act 2023 and AI Governance in India

  1. What is the DPDP Act 2023 and what does it govern? The Digital Personal Data Protection Act, 2023 is India's primary legislation governing the processing of personal data in digital form. It establishes obligations for Data Fiduciaries, rights for Data Principals, and creates the Data Protection Board of India as the regulatory and adjudicatory body.


  2. What constitutional provision underlies the DPDP Act? The DPDP Act rests on the recognition of the right to privacy as a fundamental right under Article 21 of the Constitution of India, established by the nine-judge bench in Justice K.S. Puttaswamy (Retd.) v. Union of India (2017) 10 SCC 1.


  3. Who is a Data Fiduciary under the DPDP Act and does this cover AI companies? A Data Fiduciary is any entity that determines the purpose and means of processing personal data. The DPDP Rules, 2025 clarify that processing includes AI training, fine-tuning, and prompt-response cycles, meaning AI companies processing Indian citizens' data are Data Fiduciaries subject to the Act's obligations.


  4. What are Significant Data Fiduciaries and what additional obligations apply to them? Significant Data Fiduciaries are entities designated by the government based on the volume and sensitivity of data they process and their potential impact on individuals. They are subject to enhanced obligations including Data Protection Impact Assessments, data protection officer appointments, and independent audits.


  5. What is the Machine Unlearning problem and why does it matter under the DPDP Act? Machine Unlearning refers to the technically difficult process of removing the influence of specific data points from a pre-trained AI model. Section 12 of the DPDP Act grants individuals the Right to Erasure, but current AI architectures generally cannot comply with individual erasure requests, creating a potential compliance gap that may attract penalties of up to Rs 250 crores.


  6. What is the Black Box problem in AI governance? The Black Box problem refers to the opacity of modern AI systems whose internal reasoning processes cannot be fully understood or explained by their developers. This creates tension with the DPDP Act's transparency and accountability requirements, particularly when AI systems generate false information about individuals, engaging Section 8's duty to ensure data accuracy.


  7. How does India's approach to AI regulation compare to the EU AI Act? India's DPDP Act regulates personal data and its processing, focusing on individual data sovereignty. The EU AI Act regulates AI systems themselves based on risk level. India's approach is more innovation-friendly but places a greater burden on individuals to enforce their rights, while the EU approach directly addresses systemic AI risks but may constrain innovation.


  8. What is the Data Protection Board of India and what role will it play in AI governance? The Data Protection Board of India was established in late 2025 as a specialised regulatory and adjudicatory body for data disputes. It will likely face its first major AI governance tests in cases involving data scraping and erasure demands against AI companies, with its early decisions expected to define the practical scope of the DPDP Act's application to AI.


Key Takeaways: Everything You Must Know About the DPDP Act 2023 and AI Governance in India

The DPDP Act 2023 and the DPDP Rules 2025 together constitute India's foundational data governance framework, resting on the constitutional right to privacy established in Puttaswamy v. Union of India (2017).

The Act establishes a Data Fiduciary accountability framework under which AI companies processing Indian citizens' data, including for training and fine-tuning purposes, are subject to consent, Notice, and accountability obligations.

The consent and Notice framework of Sections 5 and 6 creates significant legal uncertainty for AI training that relies on mass data scraping, as the specific, informed consent required by the Act has almost never been obtained for AI training purposes.

Significant Data Fiduciaries are subject to enhanced obligations including Data Protection Impact Assessments that must specifically address algorithmic bias risks.

The Right to Erasure under Section 12 creates the technically most challenging AI compliance obligation, as Machine Unlearning from pre-trained models is extremely difficult and in most cases currently infeasible.

The Black Box nature of modern AI creates tensions with the DPDP Act's transparency requirements and engages Section 8's strict accountability standard for data accuracy when AI systems hallucinate or generate false personal data.

The shift from no-fault to strict accountability under the DPDP Act, with penalties of up to Rs 250 crores for significant breaches, is the most consequential development in Indian technology law for the commercial AI sector.

India's data-centric regulatory approach differs fundamentally from the EU AI Act's technology-centric, risk-based approach; India's framework is more innovation-flexible but places a higher individual burden of rights enforcement.

The Data Protection Board of India, established in late 2025, will face its defining tests in 2026 through AI data scraping cases whose outcomes will determine the real-world scope of the DPDP Act's protections.

India will likely need a dedicated AI Governance Act to complement the DPDP Act's data-centric framework with direct regulation of AI systems, their risk levels, and their deployment in high-stakes domains.


Disclaimer

This article is published by CLEAR LAW (clearlaw.online) strictly for educational and informational purposes only. It does not constitute legal advice, legal opinion, or any form of professional counsel, and must not be relied upon as a substitute for consultation with a qualified legal practitioner. Nothing contained herein shall be construed as creating a lawyer-client relationship between the reader and the author, publisher, or CLEAR LAW (clearlaw.online).

All views, interpretations, and conclusions expressed in this article are solely those of the author and represent independent academic analysis. CLEAR LAW (clearlaw.online) does not endorse, verify, or guarantee the accuracy, completeness, or reliability of the content, and expressly disclaims any responsibility for the same.

While reasonable efforts are made to ensure that the information presented is accurate and up to date, no warranties or representations, express or implied, are made regarding its correctness, adequacy, or applicability to any specific factual or legal situation. Laws, regulations, and judicial interpretations are subject to change, and the content may not reflect the most current legal developments.

To the fullest extent permitted by applicable law, CLEAR LAW (clearlaw.online), the author, editors, and publisher disclaim all liability for any direct, indirect, incidental, consequential, or special damages arising out of or in connection with the use of, or reliance upon, this article.

Readers are strongly advised to seek independent legal advice from a qualified professional before making any decisions or taking any action based on the contents of this article. Reliance on any information provided in this article is strictly at the reader's own risk.

By accessing and using this article, the reader expressly agrees to the terms of this disclaimer.



When the Algorithm Knows More About You Than Your Government Does: Understanding Why India's DPDP Act 2023 Is the Most Important Law You Have Never Read

Think of your personal data as a shadow. It follows you everywhere: every search you conduct, every purchase you make, every form you fill, every prompt you type into an AI system. Unlike a physical shadow, however, this digital one does not disappear when the light changes. It accumulates, is stored, is processed, is sold, and is fed into machine learning models that use it to predict, profile, and in some cases make consequential decisions about your life. Who governs this shadow? Until recently, in India, almost nobody did.

The notification of the Digital Personal Data Protection Act, 2023 and the subsequent implementation of the DPDP Rules in early 2025 represent India's most serious legislative attempt to answer that question. Together, they mark the transition from what legal scholars have described as a decade of data-privacy vacuum into a structured statutory regime that one might justifiably call India's Cyber Constitution.

But the timing of this transition coincides with its greatest challenge. The same years that saw the DPDP Act's passage have also witnessed the most explosive growth of Generative Artificial Intelligence in history. Systems that train on billions of data points, generate content indistinguishable from human output, and make autonomous decisions affecting individual lives are now embedded in the commercial and governmental fabric of India. The question this article sets out to examine is whether the DPDP Act's framework is adequate to hold these invisible algorithms accountable to the visible law of the land.

The Constitutional Foundation: Why Puttaswamy Is the Bedrock of Everything That Follows

No analysis of the DPDP Act, 2023 can begin without acknowledging the constitutional foundation on which it rests. In Justice K.S. Puttaswamy (Retd.) v. Union of India (2017) 10 SCC 1, a nine-judge constitutional bench of the Supreme Court unanimously held that the right to privacy is a fundamental right under Article 21 of the Constitution of India. The judgment recognised, with particular prescience, that informational privacy, the right of individuals to control information about themselves, is a core dimension of this fundamental right.

The Puttaswamy judgment did not merely establish a constitutional right in the abstract. It established a framework within which any interference with that right must be assessed. Interference with privacy must satisfy three tests: it must be grounded in law, it must pursue a legitimate aim, and it must be proportionate to that aim. Every provision of the DPDP Act, every Significant Data Fiduciary (SDF) guideline issued by the Ministry of Electronics and Information Technology (MeitY), and every order of the Data Protection Board must be measured against these standards.

The significance of this constitutional foundation cannot be overstated. Unlike data protection frameworks in jurisdictions where privacy is a statutory right subject to legislative revision, India's DPDP Act rests on a fundamental right that cannot be abridged by ordinary legislation. This creates both a floor of protection that the Act must meet and a ceiling of interference that it must not breach.

The table below sets out the constitutional architecture within which the DPDP Act operates.

| Constitutional Element | Content | Implication for DPDP Act |
|---|---|---|
| Article 21 (Right to Life and Personal Liberty) | Includes informational privacy as recognised in Puttaswamy | The DPDP Act must meet constitutional standards; it cannot authorise interference with privacy that fails the Puttaswamy proportionality test |
| Puttaswamy Proportionality Test | Interference with privacy must be legal, necessary, and proportionate | State exemptions under the DPDP Act must satisfy this test; overbroad exemptions are constitutionally vulnerable |
| Article 14 (Equality) | Equal protection of all persons before the law | DPDP protections must be available equally to all individuals regardless of their digital literacy or economic status |
| Article 19(1)(a) (Freedom of Expression) | Includes the right to receive and impart information | Tension between data protection and the free flow of information must be resolved through proportionality |

The Legal Architecture of the DPDP Act 2023: What the Framework Actually Provides

The Digital Personal Data Protection Act, 2023, unlike its predecessor the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, introduces a conceptually coherent and contemporary framework for data governance. The 2011 Rules were primarily cybersecurity instruments drafted before the era of platform-scale data collection. The DPDP Act is specifically designed for a world in which personal data is the primary raw material of commercial and governmental activity.

The table below sets out the key structural elements of the DPDP Act and their significance.

| Element | Provision | Content | Significance |
|---|---|---|---|
| Data Fiduciary | Definition and Sections 5 to 10 | Any entity that determines the purpose and means of processing personal data | Establishes the primary locus of accountability; AI companies processing Indian citizens' data fall squarely within this definition |
| Consent framework | Sections 5 and 6 | Consent must be free, specific, informed, and unconditional; Notice must be provided | Creates the core legal basis for data processing; challenges AI's mass data collection model |
| Data Principal Rights | Sections 11 to 14 | Rights of access, correction, erasure, and grievance redressal | Empowers individuals to exercise control over their data; creates the legal basis for Machine Unlearning demands |
| Right to Erasure | Section 12 | Data Fiduciary must erase personal data on request | Creates the most technically challenging obligation for AI systems |
| Significant Data Fiduciaries | Section 10 and DPDP Rules 2025 | SDFs subject to enhanced obligations including DPIA, data protection officer appointment, and audits | Creates a risk-proportionate compliance tier for the largest and most consequential data processors |
| Data Protection Board | Section 18 onwards | Specialised regulatory and adjudicatory body for data disputes | Provides faster and more expert dispute resolution than traditional civil courts |
| Penalties | Section 33 | Up to Rs 250 crores for significant breaches | Creates meaningful financial deterrence for non-compliance |
| Accountability | Section 8 | Data Fiduciary must ensure accuracy of personal data | Shifts from a largely liability-free regime to strict accountability; AI hallucination directly engages this provision |

Data Fiduciaries in the Age of AI: When OpenAI and Google Are Processing Indian Citizens' Data

The concept of the Data Fiduciary is the most consequential structural element of the DPDP Act for the AI governance debate. Any entity that determines the purpose and means of processing personal data is a Data Fiduciary, and any entity that processes data on the instructions of a Fiduciary is a Data Processor. The DPDP Rules, 2025, have clarified that the term processing includes training, fine-tuning, and the prompt-response cycle of AI systems.

This clarification is significant in its practical implications. When OpenAI trains GPT models on data that includes content generated by Indian citizens, it is processing personal data. When Google processes Indian users' search queries, location data, and behavioural patterns through its AI systems, it is processing personal data. When a domestic Indian AI startup fine-tunes a model on scraped public data from Indian websites, it is processing personal data. All of these entities are Data Fiduciaries under the DPDP Act and are subject to its obligations.

The question of whether extraterritorial application of the Act to foreign AI companies is legally and practically enforceable is one of the most pressing unresolved questions in Indian data law. The Act extends to processing of personal data outside India where it is connected with offering goods or services to Data Principals within India, but the mechanisms for enforcement against non-Indian entities remain to be tested in regulatory and judicial practice.

The table below illustrates the Data Fiduciary obligations as they apply to different categories of AI actors.

| AI Actor | Data Fiduciary Status | Processing Activities Covered | Key Compliance Obligations |
|---|---|---|---|
| Large foreign AI platforms (OpenAI, Google, Meta) | Data Fiduciary for Indian data principals | Training, fine-tuning, inference, prompt-response cycles | Consent, Notice, Data Principal rights, potential SDF designation |
| Indian AI startups using scraped data | Data Fiduciary | Training on public and scraped data; model deployment | Same obligations as above; enforcement more immediately practicable |
| Indian businesses deploying third-party AI | Data Fiduciary for their users' data; Data Processor relationship with AI provider | User interaction data processed through AI systems | Accountability for AI provider's processing of user data |
| Government agencies using AI | Data Fiduciary with potential statutory exemptions | Processing of citizen data through AI systems | Subject to Act with potentially broad exemptions; constitutionality of exemptions subject to Puttaswamy test |

Consent, Notice, and the AI Scraping Problem: Is Most AI Training Legally Untenable Under Indian Law?

Sections 5 and 6 of the DPDP Act establish the consent framework at the heart of the legislation. Consent must be free, specific, informed, and unconditional. Before obtaining consent, the Data Fiduciary must provide a Notice to the Data Principal that clearly specifies what data is being collected, the purpose for which it will be processed, and how the Principal may exercise their rights.

This framework creates an immediate and serious challenge for the AI training model as currently practised. Generative AI systems are trained on massive datasets that may contain billions of individual data points, sourced from web scraping, public databases, licensed datasets, and user interactions. The individuals whose data appears in these datasets have in almost no case provided the specific, informed consent that the DPDP Act requires. They have not received a Notice from the AI company specifying that their personal data will be used to train a large language model. They have not been given an opportunity to withhold consent or to exercise their rights before the processing took place.

This analytical gap is not a marginal compliance concern. It suggests that much of the AI training currently conducted on Indian citizens' data is operating on legally uncertain ground under the DPDP Act's consent framework. The Act does provide for legitimate uses other than consent, including processing for legal obligations, safety emergencies, and legitimate state functions, but it is not clear that commercial AI training falls within any of these categories.
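The gap between blanket terms-of-service acceptance and the Act's purpose-specific consent can be made concrete with a minimal sketch. Everything here is hypothetical and assumed for illustration (the `ConsentRecord` structure and the `may_process` check are not drawn from the Act or any compliance toolkit); the point is only that a Sections 5 and 6 style gate must match each processing purpose against a purpose the Data Principal specifically consented to.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    # Hypothetical record of purposes a Data Principal consented to after Notice
    principal_id: str
    consented_purposes: set = field(default_factory=set)
    withdrawn: bool = False

def may_process(record: ConsentRecord, purpose: str) -> bool:
    """Return True only if there is live, purpose-specific consent.

    A blanket terms-of-service acceptance would appear here as one broad
    purpose such as "service improvement", which does not cover a distinct
    purpose such as "model training"."""
    return (not record.withdrawn) and purpose in record.consented_purposes

consent = ConsentRecord("DP-001", {"order fulfilment", "customer support"})
print(may_process(consent, "customer support"))  # True: purpose was consented to
print(may_process(consent, "model training"))    # False: no specific consent
```

On this logic, a model-training purpose never silently inherits consent given for an unrelated purpose, which is the purpose-limitation problem the Act poses for scraped training corpora.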

The table below analyses the consent framework challenges specific to AI training.

| Challenge | Description | Legal Implication Under DPDP Act |
|---|---|---|
| Mass scraping without Notice | AI systems train on billions of scraped data points without providing individual notices to data subjects | Potential violation of Section 6 Notice requirement for every individual whose personal data was scraped |
| Absence of specific consent | Consent for AI training is not obtained as a separate, specific act; web browsing terms of service do not constitute specific informed consent for model training | Sections 5 and 6 require consent to be specific to the purpose; general terms of service consent is insufficient |
| Retroactive consent impossibility | For already-trained models, the consent that should have been obtained before training cannot practically be obtained after the fact | Creates an ongoing compliance gap for existing models; potentially requires remediation through alternative legal basis |
| Purpose limitation | Data collected for one purpose cannot be used for another without fresh consent | Training AI models on data collected for other purposes requires a new legal basis or fresh consent |
| Cross-border processing | Data of Indian principals processed abroad may escape immediate enforcement | Creates regulatory gap that requires international regulatory cooperation to address |

AI Governance and the Significant Data Fiduciary Framework: The 2025-26 Regulatory Developments

As of early 2026, MeitY has introduced specific guidelines for Significant Data Fiduciaries, a designation that applies to entities whose scale, sensitivity of data processing, or potential impact on individuals and national security places them in a category requiring enhanced regulatory oversight. The SDF framework represents the most important recent development in the operationalisation of the DPDP Act for AI governance.

Data Protection Impact Assessments: The DPIA as a Governance Tool

SDFs are required to conduct Data Protection Impact Assessments before deploying new data processing activities, including AI systems. A DPIA is not merely a compliance formality. It is a substantive analytical exercise that requires the Data Fiduciary to identify the privacy risks of a proposed processing activity, assess whether those risks are justified by the purpose of the activity, and implement measures to mitigate them.

For AI systems specifically, the DPIA framework creates an obligation to analyse algorithmic bias, the risk that an AI model may discriminate against individuals based on protected data categories. An AI hiring system that discriminates against women, or a credit scoring model that disadvantages members of particular communities, creates risks that a properly conducted DPIA should identify before the system is deployed. The DPIA requirement therefore functions as a pre-deployment safeguard against harms that might otherwise only be discovered after they have affected thousands or millions of individuals.
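As an illustration of the kind of quantitative check a DPIA might include, the sketch below computes a simple demographic parity gap across groups. The metric choice and the decision data are assumptions made for illustration only; the Act and the DPDP Rules do not prescribe any particular bias measure.

```python
def selection_rate(outcomes):
    """Fraction of positive decisions (e.g. shortlisted, approved)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in selection rates across groups.

    A large gap is a signal, not proof, of bias: a DPIA would treat it
    as a risk to investigate and mitigate before deployment."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical hiring-model decisions (1 = shortlisted) by group
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # selection rate 0.75
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # selection rate 0.375
}
gap = demographic_parity_gap(decisions)
print(f"Selection-rate gap: {gap:.3f}")
```

The value of running such a check inside a DPIA is precisely that it surfaces disparate outcomes before deployment, rather than after the system has affected real applicants.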

The Right to Erasure and the Machine Unlearning Problem

Section 12 of the DPDP Act grants every Data Principal the right to request erasure of their personal data. This right is conceptually simple but technically formidable when applied to AI systems. Once a data point has been incorporated into a trained AI model, removing the influence of that specific data point from the model's parameters, a process researchers call Machine Unlearning, is extremely difficult and in most cases practically impossible with current technology.

This creates one of the sharpest tensions in the DPDP Act's application to AI. If an Indian citizen invokes their right to erasure under Section 12 and the Data Fiduciary cannot remove the citizen's data from its AI model because Machine Unlearning is technically infeasible with the model's architecture, the Fiduciary is in potential violation of the Act, with penalties of up to Rs 250 crores for significant breaches.

The legal implication is clear and the challenge it creates is genuinely novel. Unlike conventional databases from which a record can simply be deleted, AI models do not store data as retrievable records. The training data is transformed into model weights and parameters that represent the statistical patterns in the original data rather than the data itself. Complying with erasure requests therefore requires either significant advances in Machine Unlearning technology, architectural decisions at the model design stage that build in erasure capacity, or a judicial or legislative determination that the Right to Erasure under Section 12 applies differently to AI models than to conventional databases.
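A toy model makes the contrast with database deletion concrete. The sketch below is a deliberate simplification, assuming a one-parameter least-squares model: erasing the record from the "database" (the lists) is trivial, but exact unlearning means refitting from scratch, because the fitted weight carries only a statistical trace of the record, not the record itself.

```python
def fit_slope(xs, ys):
    """One-parameter model y ~ w*x, fit by least squares.

    The training data survives only as this single number w; no
    individual record is stored in the 'model'."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 30.0]   # the last point is one person's data

w_full = fit_slope(xs, ys)

# Database erasure: delete the record, done.
# Model "erasure": the only exact route is refitting without the point,
# because w does not contain the record, only its influence.
w_unlearned = fit_slope(xs[:-1], ys[:-1])

print(round(w_full, 3), round(w_unlearned, 3))
```

With one parameter, refitting is instant; with billions of parameters trained over weeks, the same exact route is economically and technically prohibitive, which is why Machine Unlearning research focuses on approximate methods and on architectures designed for deletion.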

The Black Box Problem: Accountability, Transparency, and AI Hallucination Under the DPDP Act

One of the most intellectually challenging dimensions of AI governance under the DPDP Act is what technologists call the Black Box problem. Modern large language models and deep neural networks are extraordinarily complex systems whose internal reasoning processes cannot be fully understood or explained even by their developers. When such a system makes a decision about an individual, whether recommending a loan approval, generating a profile, or flagging a communication for review, neither the developer nor the regulator can provide a complete explanation of how that decision was reached.

Transparency is a key requirement of the DPDP Act. The Notice and consent framework requires Data Fiduciaries to clearly explain the purpose of processing to Data Principals. The accountability framework requires Fiduciaries to be able to demonstrate compliance with the Act's requirements. Both of these requirements are in tension with the fundamental opacity of Black Box AI systems.

Section 8 of the Act places a direct duty on Data Fiduciaries to ensure the accuracy of personal data they process. This provision engages directly with the phenomenon of AI hallucination, the tendency of large language models to generate false, fabricated, or misleading information with apparent confidence. When an AI system generates a false profile of an individual, creates inaccurate financial assessments, or produces fabricated personal data, the Data Fiduciary is directly liable under Section 8. This shift from the largely liability-free environment that characterised India's previous IT framework to the strict accountability standard of the DPDP Act is perhaps the most significant development in Indian tech law in a generation.
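One way a deployment might operationalise this duty is an accuracy guardrail that releases AI-generated statements about a person only when they match an authoritative record. The sketch below is a hypothetical illustration; the record store and function names are assumptions, not anything prescribed by the Act or by any real system.

```python
# Hypothetical guardrail: before publishing AI-generated statements about a
# person, check each claimed field against an authoritative record.
VERIFIED_RECORDS = {
    "DP-001": {"city": "Pune", "account_status": "active"},
}

def filter_unverifiable(principal_id: str, generated: dict) -> dict:
    """Keep only AI-generated fields that match the record of truth.

    Anything the model asserted but the fiduciary cannot verify is
    dropped rather than published, since Section 8 makes the fiduciary
    answerable for the accuracy of personal data it processes."""
    record = VERIFIED_RECORDS.get(principal_id, {})
    return {k: v for k, v in generated.items() if record.get(k) == v}

model_output = {"city": "Pune", "account_status": "suspended", "age": 47}
safe_output = filter_unverifiable("DP-001", model_output)
print(safe_output)  # {'city': 'Pune'}: the hallucinated fields are dropped
```

The design choice here is defensive: the fiduciary treats model output as unverified by default, inverting the common practice of publishing generated text and correcting it only after a complaint.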

The table below illustrates how the Black Box problem engages specific DPDP Act provisions.

| AI Characteristic | DPDP Act Requirement | Tension |
|---|---|---|
| Opacity of decision-making | Transparency in Notice; ability to explain processing to Data Principals | Data Fiduciary cannot explain what it cannot itself understand |
| AI hallucination | Section 8 duty to ensure accuracy of personal data | Fiduciary is strictly liable for false data generated by its AI system |
| Automated profiling | Right of Data Principal to know about automated processing | Profiling through AI may occur without the Data Principal's knowledge |
| Algorithmic bias | DPIA requirement for SDFs to identify discrimination risks | Bias may be emergent from training data rather than intentional design |
| Inability to erase data from trained models | Section 12 Right to Erasure | Technical impossibility of Machine Unlearning creates compliance gap |

India Versus the EU AI Act: Two Philosophies of Technology Regulation

The contrast between India's data-centric approach under the DPDP Act and the European Union's technology-centric approach under the EU AI Act reflects a fundamental difference in regulatory philosophy that will shape the global governance of AI for years to come.

The EU AI Act, implemented in 2024-25, regulates AI systems themselves on the basis of their risk level. High-risk AI applications, those affecting employment, credit, law enforcement, and critical infrastructure, are subject to stringent requirements including transparency obligations, human oversight, and conformity assessments. Prohibited AI practices, including social scoring by governments and real-time biometric surveillance in public spaces, are banned outright. The Act's target is the technology and its use, not the data that feeds it.

India's DPDP Act takes a different starting point. Its primary focus is the sovereignty of the individual over their personal data. It regulates what can be done with personal data, who can do it, and on what legal basis, rather than regulating the AI systems that process that data. This approach is more flexible for innovation: an AI company can deploy almost any system it chooses as long as it processes personal data in compliance with the consent, Notice, and accountability framework of the Act.

The table below compares the two approaches and their implications.

| Dimension | India DPDP Act 2023 | EU AI Act 2024-25 |
|---|---|---|
| Regulatory target | Personal data and its processing | AI systems and their deployment |
| Primary mechanism | Consent, Notice, Data Principal rights, accountability | Risk classification, conformity assessment, human oversight |
| Innovation flexibility | Higher; less direct regulation of AI systems themselves | Lower; high-risk AI faces significant pre-deployment requirements |
| Individual empowerment | High in theory; burden of proof on individual during grievances | High for prohibited practices; systematic protections for high-risk AI |
| Accountability model | Data Fiduciary accountability for processing outcomes | Developer accountability for system design and deployment |
| Gap for AI governance | Does not directly regulate AI systems; Black Box problem unaddressed | Addresses AI systems directly but may create compliance barriers for smaller innovators |
| Future convergence potential | India may need dedicated AI Governance Act to complement DPDP | EU approach may need data governance complement for coherent framework |

India's approach is more conducive to the domestic AI startup ecosystem and avoids the prescriptive technology mandates that some argue chill innovation. However, it places a higher burden on individual Data Principals who must initiate grievance proceedings to enforce their rights, and it does not directly address the systemic risks of AI deployment in high-stakes domains that the EU AI Act specifically targets.

The Data Protection Board: A New Institutional Architecture for the Digital Age

The establishment of the Data Protection Board of India in late 2025 represents a crucial institutional development that complements the DPDP Act's substantive framework. The DPB provides a specialised forum for adjudicating digital data disputes, bypassing the delays and technical unfamiliarity of traditional civil courts.

The significance of the DPB for AI governance cannot be overstated. The cases it will face in 2026 and beyond will be among the most technically complex in Indian legal history: cases involving AI data scraping, Machine Unlearning demands, algorithmic discrimination, and the accountability of black box systems for data inaccuracies. These are not cases that a general civil court, however distinguished its bench, is well equipped to handle. The DPB, if properly constituted with members who combine legal expertise with technical competence, has the potential to develop a body of AI-specific data protection jurisprudence that will guide both enforcement and industry practice for decades.

The DPB's first major enforcement tests are expected to involve AI data scraping by both domestic and foreign platforms. The outcomes of these early cases will be defining: they will establish whether the DPDP Act's consent and accountability framework is a real constraint on AI companies or a paper tiger, and they will determine India's position in the global regulatory landscape as a serious data protection jurisdiction or one that prioritises innovation over individual rights.

Conclusion: The Invisible Algorithm Must Answer to the Visible Law

The DPDP Act 2023, implemented through the Rules of 2025 and operationalised through the Data Protection Board established in late 2025, is a genuine and important achievement in Indian data governance. It rests on the strongest possible constitutional foundation in the Puttaswamy judgment, introduces contemporary accountability standards, and creates institutional infrastructure capable of meaningful enforcement.

But it is not sufficient for the AI challenge it now faces. The consent and Notice framework, designed for identifiable and deliberate data collection, struggles to address the mass scraping and training processes of Generative AI. The Right to Erasure creates obligations that current AI technology cannot technically satisfy. The accountability framework for data accuracy is tested to its limits by the hallucination problem inherent in large language models. And the Black Box nature of modern AI creates transparency demands that the systems themselves cannot currently meet.

Three developments are certain as we look toward the rest of 2026 and beyond. The Data Protection Board will face its first major tests in cases involving AI data scraping and these cases will define the practical scope of the DPDP Act's consent requirements. Machine Unlearning will become the most significant area of legal-technical research in Indian data law, as the Right to Erasure encounters the realities of trained AI systems. And India will likely need a dedicated AI Governance Act, analogous in ambition if not identical in design to the EU AI Act, to complement the DPDP Act's data-centric framework with direct regulation of AI systems and their deployment.

The goal of Indian law is to create a Digital Nagrik, a digital citizen who is empowered, informed, and protected. The success of this vision depends on how effectively the legal system can hold the invisible algorithms of the data economy accountable to the visible law of the land. That accountability is not yet fully established. Building it is the defining task of Indian data law in the years ahead.

Frequently Asked Questions (FAQs) on the DPDP Act 2023 and AI Governance in India

1. What is the DPDP Act 2023 and what does it govern? The Digital Personal Data Protection Act, 2023 is India's primary legislation governing the processing of personal data in digital form. It establishes obligations for Data Fiduciaries, rights for Data Principals, and creates the Data Protection Board of India as the regulatory and adjudicatory body.

2. What constitutional provision underlies the DPDP Act? The DPDP Act rests on the recognition of the right to privacy as a fundamental right under Article 21 of the Constitution of India, established by the nine-judge bench in Justice K.S. Puttaswamy (Retd.) v. Union of India (2017) 10 SCC 1.

3. Who is a Data Fiduciary under the DPDP Act and does this cover AI companies? A Data Fiduciary is any entity that determines the purpose and means of processing personal data. The DPDP Rules, 2025 clarify that processing includes AI training, fine-tuning, and prompt-response cycles, meaning AI companies processing Indian citizens' data are Data Fiduciaries subject to the Act's obligations.

4. What are Significant Data Fiduciaries and what additional obligations apply to them? Significant Data Fiduciaries are entities designated by the government based on the volume and sensitivity of data they process and their potential impact on individuals. They are subject to enhanced obligations including Data Protection Impact Assessments, data protection officer appointments, and independent audits.

5. What is the Machine Unlearning problem and why does it matter under the DPDP Act? Machine Unlearning refers to the technically difficult process of removing the influence of specific data points from a pre-trained AI model. Section 12 of the DPDP Act grants individuals the Right to Erasure, but current AI architectures generally cannot comply with individual erasure requests, creating a potential compliance gap that may attract penalties of up to Rs 250 crores.

6. What is the Black Box problem in AI governance? The Black Box problem refers to the opacity of modern AI systems whose internal reasoning processes cannot be fully understood or explained by their developers. This creates tension with the DPDP Act's transparency and accountability requirements, particularly when AI systems generate false information about individuals, engaging Section 8's duty to ensure data accuracy.

7. How does India's approach to AI regulation compare to the EU AI Act? India's DPDP Act regulates personal data and its processing, focusing on individual data sovereignty. The EU AI Act regulates AI systems themselves based on risk level. India's approach is more innovation-friendly but places a greater burden on individuals to enforce their rights, while the EU approach directly addresses systemic AI risks but may constrain innovation.

8. What is the Data Protection Board of India and what role will it play in AI governance? The Data Protection Board of India was established in late 2025 as a specialised regulatory and adjudicatory body for data disputes. It will likely face its first major AI governance tests in cases involving data scraping and erasure demands against AI companies, with its early decisions expected to define the practical scope of the DPDP Act's application to AI.

Key Takeaways: Everything You Must Know About the DPDP Act 2023 and AI Governance in India

The DPDP Act 2023 and the DPDP Rules 2025 together constitute India's foundational data governance framework, resting on the constitutional right to privacy established in Puttaswamy v. Union of India (2017).

The Act establishes a Data Fiduciary accountability framework under which AI companies processing Indian citizens' data, including for training and fine-tuning purposes, are subject to consent, Notice, and accountability obligations.

The consent and Notice framework of Sections 5 and 6 creates significant legal uncertainty for AI training that relies on mass data scraping, as the specific, informed consent required by the Act has almost never been obtained for AI training purposes.

Significant Data Fiduciaries are subject to enhanced obligations including Data Protection Impact Assessments that must specifically address algorithmic bias risks.

The Right to Erasure under Section 12 creates the most technically challenging AI compliance obligation, as Machine Unlearning from pre-trained models is extremely difficult and in most cases currently infeasible.

The Black Box nature of modern AI creates tensions with the DPDP Act's transparency requirements and engages Section 8's strict accountability standard for data accuracy when AI systems hallucinate or generate false personal data.

The shift from no-fault to strict accountability under the DPDP Act, with penalties of up to Rs 250 crores for significant breaches, is the most consequential development in Indian technology law for the commercial AI sector.

India's data-centric regulatory approach differs fundamentally from the EU AI Act's technology-centric, risk-based approach; India's framework is more flexible for innovation but places a heavier rights-enforcement burden on individuals.

The Data Protection Board of India, established in late 2025, will face its defining tests in 2026 through AI data scraping cases whose outcomes will determine the real-world scope of the DPDP Act's protections.

India will likely need a dedicated AI Governance Act to complement the DPDP Act's data-centric framework with direct regulation of AI systems, their risk levels, and their deployment in high-stakes domains.

References

The Digital Personal Data Protection Act, No. 22 of 2023, India Code (2023): The primary legislation governing digital personal data protection in India, establishing the Data Fiduciary framework, consent requirements, Data Principal rights, and the Data Protection Board.

Digital Personal Data Protection Rules, 2025, notified by the Ministry of Electronics and Information Technology: The subordinate legislation implementing the DPDP Act, clarifying that AI training, fine-tuning, and prompt-response cycles constitute processing under the Act.

The Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011: The predecessor regulatory framework for data protection in India, superseded in significant respects by the DPDP Act.

Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1: The nine-judge Supreme Court bench decision recognising the right to privacy as a fundamental right under Article 21 of the Constitution and establishing the proportionality framework for assessing interference with privacy.

Vinit Kumar v. Central Bureau of Investigation, 2019 SCC OnLine Bom 3155: The Bombay High Court decision on surveillance and privacy, providing judicial context for the constitutional protections applicable to digital data.

Ministry of Electronics and Information Technology, Report of the Committee of Experts on Non-Personal Data Governance Framework (2025): The government's policy framework for non-personal data governance, relevant to AI training on aggregated and anonymised datasets.

NITI Aayog, National Strategy for Artificial Intelligence: #AIforAll (Discussion Paper, updated 2025): The government's strategic framework for AI development, providing the policy context within which the DPDP Act's AI governance provisions must be understood.

Privacy in the Age of Algorithms, 14 Indian Journal of Law and Technology 45 (2025): Academic analysis of the intersection of algorithmic decision-making and privacy law under the emerging Indian data protection framework.

Disclaimer

This article is published by CLEAR LAW (clearlaw.online) strictly for educational and informational purposes only. It does not constitute legal advice, legal opinion, or any form of professional counsel, and must not be relied upon as a substitute for consultation with a qualified legal practitioner. Nothing contained herein shall be construed as creating a lawyer-client relationship between the reader and the author, publisher, or CLEAR LAW (clearlaw.online).

All views, interpretations, and conclusions expressed in this article are solely those of the author and represent independent academic analysis. CLEAR LAW (clearlaw.online) does not endorse, verify, or guarantee the accuracy, completeness, or reliability of the content, and expressly disclaims any responsibility for the same.

While reasonable efforts are made to ensure that the information presented is accurate and up to date, no warranties or representations, express or implied, are made regarding its correctness, adequacy, or applicability to any specific factual or legal situation. Laws, regulations, and judicial interpretations are subject to change, and the content may not reflect the most current legal developments.

To the fullest extent permitted by applicable law, CLEAR LAW (clearlaw.online), the author, editors, and publisher disclaim all liability for any direct, indirect, incidental, consequential, or special damages arising out of or in connection with the use of, or reliance upon, this article.

Readers are strongly advised to seek independent legal advice from a qualified professional before making any decisions or taking any action based on the contents of this article. Reliance on any information provided in this article is strictly at the reader's own risk.

By accessing and using this article, the reader expressly agrees to the terms of this disclaimer.



When the Algorithm Knows More About You Than Your Government Does: Understanding Why India's DPDP Act 2023 Is the Most Important Law You Have Never Read

Think of your personal data as a shadow. It follows you everywhere: every search you conduct, every purchase you make, every form you fill, every prompt you type into an AI system. Unlike a physical shadow, however, this digital one does not disappear when the light changes. It accumulates, is stored, is processed, is sold, and is fed into machine learning models that use it to predict, profile, and in some cases make consequential decisions about your life. Who governs this shadow? Until recently, in India, almost nobody did.

The notification of the Digital Personal Data Protection Act, 2023 and the subsequent implementation of the DPDP Rules in early 2025 represent India's most serious legislative attempt to answer that question. Together, they mark the transition from what legal scholars have described as a decade of data-privacy vacuum into a structured statutory regime that one might justifiably call India's Cyber Constitution.

But the timing of this transition is not coincidental with its greatest challenge. The same years that saw the DPDP Act's passage have also witnessed the most explosive growth of Generative Artificial Intelligence in history. Systems that train on billions of data points, generate content indistinguishable from human output, and make autonomous decisions affecting individual lives are now embedded in the commercial and governmental fabric of India. The question this article sets out to examine is whether the DPDP Act's framework is adequate to hold these invisible algorithms accountable to the visible law of the land.

The Constitutional Foundation: Why Puttaswamy Is the Bedrock of Everything That Follows

No analysis of the DPDP Act, 2023 can begin without acknowledging the constitutional foundation on which it rests. In Justice K.S. Puttaswamy (Retd.) v. Union of India (2017) 10 SCC 1, a nine-judge constitutional bench of the Supreme Court unanimously held that the right to privacy is a fundamental right under Article 21 of the Constitution of India. The judgment recognised, with particular prescience, that informational privacy, the right of individuals to control information about themselves, is a core dimension of this fundamental right.

The Puttaswamy judgment did not merely establish a constitutional right in the abstract. It established a framework within which any interference with that right must be assessed. Interference with privacy must satisfy three tests: it must be grounded in law, it must pursue a legitimate aim, and it must be proportionate to that aim. Every provision of the DPDP Act, every SDF guideline from MeitY, and every order of the Data Protection Board must be measured against these standards.

The significance of this constitutional foundation cannot be overstated. Unlike data protection frameworks in jurisdictions where privacy is a statutory right subject to legislative revision, India's DPDP Act rests on a fundamental right that cannot be abridged by ordinary legislation. This creates both a floor of protection that the Act must meet and a ceiling of interference that it must not breach.

The table below sets out the constitutional architecture within which the DPDP Act operates.

Constitutional Element

Content

Implication for DPDP Act

Article 21 (Right to Life and Personal Liberty)

Includes informational privacy as recognised in Puttaswamy

The DPDP Act must meet constitutional standards; it cannot authorise interference with privacy that fails the Puttaswamy proportionality test

Puttaswamy Proportionality Test

Interference with privacy must be legal, necessary, and proportionate

State exemptions under the DPDP Act must satisfy this test; overbroad exemptions are constitutionally vulnerable

Article 14 (Equality)

Equal protection of all persons before the law

DPDP protections must be available equally to all individuals regardless of their digital literacy or economic status

Article 19(1)(a) (Freedom of Expression)

Includes the right to receive and impart information

Tension between data protection and the free flow of information must be resolved through proportionality

The Legal Architecture of the DPDP Act 2023: What the Framework Actually Provides

The Digital Personal Data Protection Act, 2023, unlike its predecessor the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, introduces a conceptually coherent and contemporary framework for data governance. The 2011 Rules were primarily cybersecurity instruments drafted before the era of platform-scale data collection. The DPDP Act is specifically designed for a world in which personal data is the primary raw material of commercial and governmental activity.

The table below sets out the key structural elements of the DPDP Act and their significance.

Element

Provision

Content

Significance

Data Fiduciary

Definition and Sections 5 to 10

Any entity that determines the purpose and means of processing personal data

Establishes the primary locus of accountability; AI companies processing Indian citizens' data fall squarely within this definition

Consent framework

Sections 5 and 6

Consent must be free, specific, informed, and unconditional; Notice must be provided

Creates the core legal basis for data processing; challenges AI's mass data collection model

Data Principal Rights

Sections 11 to 14

Rights of access, correction, erasure, and grievance redressal

Empowers individuals to exercise control over their data; creates the legal basis for Machine Unlearning demands

Right to Erasure

Section 12

Data Fiduciary must erase personal data on request

Creates the most technically challenging obligation for AI systems

Significant Data Fiduciaries

Section 10 and DPDP Rules 2025

SDFs subject to enhanced obligations including DPIA, data protection officer appointment, and audits

Creates a risk-proportionate compliance tier for the largest and most consequential data processors

Data Protection Board

Section 18 onwards

Specialised regulatory and adjudicatory body for data disputes

Provides faster and more expert dispute resolution than traditional civil courts

Penalties

Section 33

Up to Rs 250 crores for significant breaches

Creates meaningful financial deterrence for non-compliance

Accountability

Section 8

Data Fiduciary must ensure accuracy of personal data

Shifts from no-fault to strict accountability; AI hallucination directly engages this provision

Data Fiduciaries in the Age of AI: When OpenAI and Google Are Processing Indian Citizens' Data

The concept of the Data Fiduciary is the most consequential structural element of the DPDP Act for the AI governance debate. Any entity that determines the purpose and means of processing personal data is a Data Fiduciary, and any entity that processes data on the instructions of a Fiduciary is a Data Processor. The DPDP Rules, 2025, have clarified that the term processing includes training, fine-tuning, and the prompt-response cycle of AI systems.

This clarification is significant in its practical implications. When OpenAI trains GPT models on data that includes content generated by Indian citizens, it is processing personal data. When Google processes Indian users' search queries, location data, and behavioural patterns through its AI systems, it is processing personal data. When a domestic Indian AI startup fine-tunes a model on scraped public data from Indian websites, it is processing personal data. All of these entities are Data Fiduciaries under the DPDP Act and are subject to its obligations.

The question of whether extraterritorial application of the Act to foreign AI companies is legally and practically enforceable is one of the most pressing unresolved questions in Indian data law. The Act's provisions extend to processing of personal data of Indian data principals regardless of where the processing takes place, but the mechanisms for enforcement against non-Indian entities remain to be tested in regulatory and judicial practice.

The table below illustrates the Data Fiduciary obligations as they apply to different categories of AI actors.

AI Actor

Data Fiduciary Status

Processing Activities Covered

Key Compliance Obligations

Large foreign AI platforms (OpenAI, Google, Meta)

Data Fiduciary for Indian data principals

Training, fine-tuning, inference, prompt-response cycles

Consent, Notice, Data Principal rights, potential SDF designation

Indian AI startups using scraped data

Data Fiduciary

Training on public and scraped data; model deployment

Same obligations as above; enforcement more immediately practicable

Indian businesses deploying third-party AI

Data Fiduciary for their users' data; Data Processor relationship with AI provider

User interaction data processed through AI systems

Accountability for AI provider's processing of user data

Government agencies using AI

Data Fiduciary with potential statutory exemptions

Processing of citizen data through AI systems

Subject to Act with potentially broad exemptions; constitutionality of exemptions subject to Puttaswamy test

Consent, Notice, and the AI Scraping Problem: Is Most AI Training Legally Untenable Under Indian Law?

Sections 5 and 6 of the DPDP Act establish the consent framework at the heart of the legislation. Consent must be free, specific, informed, and unconditional. Before obtaining consent, the Data Fiduciary must provide a Notice to the Data Principal that clearly specifies what data is being collected, the purpose for which it will be processed, and how the Principal may exercise their rights.

This framework creates an immediate and serious challenge for the AI training model as currently practised. Generative AI systems are trained on massive datasets that may contain billions of individual data points, sourced from web scraping, public databases, licensed datasets, and user interactions. The individuals whose data appears in these datasets have in almost no case provided the specific, informed consent that the DPDP Act requires. They have not received a Notice from the AI company specifying that their personal data will be used to train a large language model. They have not been given an opportunity to withhold consent or to exercise their rights before the processing took place.

This analytical gap is not a marginal compliance concern. It suggests that much of the AI training currently conducted on Indian citizens' data is operating on legally uncertain ground under the DPDP Act's consent framework. The Act does provide for legitimate uses other than consent, including processing for legal obligations, safety emergencies, and legitimate state functions, but it is not clear that commercial AI training falls within any of these categories.

The table below analyses the consent framework challenges specific to AI training.

Challenge

Description

Legal Implication Under DPDP Act

Mass scraping without Notice

AI systems train on billions of scraped data points without providing individual notices to data subjects

Potential violation of Section 6 Notice requirement for every individual whose personal data was scraped

Absence of specific consent

Consent for AI training is not obtained as a separate, specific act; web browsing terms of service do not constitute specific informed consent for model training

Sections 5 and 6 require consent to be specific to the purpose; general terms of service consent is insufficient

Retroactive consent impossibility

For already-trained models, the consent that should have been obtained before training cannot practically be obtained after the fact

Creates an ongoing compliance gap for existing models; potentially requires remediation through alternative legal basis

Purpose limitation

Data collected for one purpose cannot be used for another without fresh consent

Training AI models on data collected for other purposes requires a new legal basis or fresh consent

Cross-border processing

Data of Indian principals processed abroad may escape immediate enforcement

Creates regulatory gap that requires international regulatory cooperation to address

AI Governance and the Significant Data Fiduciary Framework: The 2025-26 Regulatory Developments

As of early 2026, MeitY has introduced specific guidelines for Significant Data Fiduciaries, a designation that applies to entities whose scale, sensitivity of data processing, or potential impact on individuals and national security places them in a category requiring enhanced regulatory oversight. The SDF framework represents the most important recent development in the operationalisation of the DPDP Act for AI governance.

Data Protection Impact Assessments: The DPIA as a Governance Tool

SDFs are required to conduct Data Protection Impact Assessments before deploying new data processing activities, including AI systems. A DPIA is not merely a compliance formality. It is a substantive analytical exercise that requires the Data Fiduciary to identify the privacy risks of a proposed processing activity, assess whether those risks are justified by the purpose of the activity, and implement measures to mitigate them.

For AI systems specifically, the DPIA framework creates an obligation to analyse algorithmic bias, the risk that an AI model may discriminate against individuals based on protected data categories. An AI hiring system that discriminates against women, or a credit scoring model that disadvantages members of particular communities, creates risks that a properly conducted DPIA should identify before the system is deployed. The DPIA requirement therefore functions as a pre-deployment safeguard against harms that might otherwise only be discovered after they have affected thousands or millions of individuals.

The Right to Erasure and the Machine Unlearning Problem

Section 12 of the DPDP Act grants every Data Principal the right to request erasure of their personal data. This right is conceptually simple but technically formidable when applied to AI systems. Once a data point has been incorporated into a trained AI model, removing the influence of that specific data point from the model's parameters, a process researchers call Machine Unlearning, is extremely difficult and in most cases practically impossible with current technology.

This creates one of the sharpest tensions in the DPDP Act's application to AI. If an Indian citizen invokes their right to erasure under Section 12 and the Data Fiduciary cannot remove the citizen's data from its AI model because Machine Unlearning is technically infeasible with the model's architecture, the Fiduciary is in potential violation of the Act, with penalties of up to Rs 250 crores for significant breaches.

The legal implication is clear and the challenge it creates is genuinely novel. Unlike conventional databases from which a record can simply be deleted, AI models do not store data as retrievable records. The training data is transformed into model weights and parameters that represent the statistical patterns in the original data rather than the data itself. Complying with erasure requests therefore requires either significant advances in Machine Unlearning technology, architectural decisions at the model design stage that build in erasure capacity, or a judicial or legislative determination that the Right to Erasure under Section 12 applies differently to AI models than to conventional databases.

The Black Box Problem: Accountability, Transparency, and AI Hallucination Under the DPDP Act

One of the most intellectually challenging dimensions of AI governance under the DPDP Act is what technologists call the Black Box problem. Modern large language models and deep neural networks are extraordinarily complex systems whose internal reasoning processes cannot be fully understood or explained even by their developers. When such a system makes a decision about an individual, whether recommending a loan approval, generating a profile, or flagging a communication for review, neither the developer nor the regulator can provide a complete explanation of how that decision was reached.

Transparency is a key requirement of the DPDP Act. The Notice and consent framework requires Data Fiduciaries to clearly explain the purpose of processing to Data Principals. The accountability framework requires Fiduciaries to be able to demonstrate compliance with the Act's requirements. Both of these requirements are in tension with the fundamental opacity of Black Box AI systems.

Section 8 of the Act places a direct duty on Data Fiduciaries to ensure the accuracy of personal data they process. This provision engages directly with the phenomenon of AI hallucination, the tendency of large language models to generate false, fabricated, or misleading information with apparent confidence. When an AI system generates a false profile of an individual, creates inaccurate financial assessments, or produces fabricated personal data, the Data Fiduciary is directly liable under Section 8. This shift from the no-fault environment that characterised India's previous IT framework to the strict accountability standard of the DPDP Act is perhaps the most significant development in Indian tech law in a generation.

The table below illustrates how the Black Box problem engages specific DPDP Act provisions.

AI Characteristic

DPDP Act Requirement

Tension

Opacity of decision-making

Transparency in Notice; ability to explain processing to Data Principals

Data Fiduciary cannot explain what it cannot itself understand

AI hallucination

Section 8 duty to ensure accuracy of personal data

Fiduciary is strictly liable for false data generated by its AI system

Automated profiling

Right of Data Principal to know about automated processing

Profiling through AI may occur without the Data Principal's knowledge

Algorithmic bias

DPIA requirement for SDFs to identify discrimination risks

Bias may be emergent from training data rather than intentional design

Inability to erasure data from trained models

Section 12 Right to Erasure

Technical impossibility of Machine Unlearning creates compliance gap

India Versus the EU AI Act: Two Philosophies of Technology Regulation

The contrast between India's data-centric approach under the DPDP Act and the European Union's technology-centric approach under the EU AI Act reflects a fundamental difference in regulatory philosophy that will shape the global governance of AI for years to come.

The EU AI Act, implemented in 2024-25, regulates AI systems themselves on the basis of their risk level. High-risk AI applications, those affecting employment, credit, law enforcement, and critical infrastructure, are subject to stringent requirements including transparency obligations, human oversight, and conformity assessments. Prohibited AI practices, including social scoring by governments and real-time biometric surveillance in public spaces, are banned outright. The Act's target is the technology and its use, not the data that feeds it.

India's DPDP Act takes a different starting point. Its primary focus is the sovereignty of the individual over their personal data. It regulates what can be done with personal data, who can do it, and on what legal basis, rather than regulating the AI systems that process that data. This approach is more flexible for innovation: an AI company can deploy almost any system it chooses as long as it processes personal data in compliance with the consent, Notice, and accountability framework of the Act.

The table below compares the two approaches and their implications.

Dimension

India DPDP Act 2023

EU AI Act 2024-25

Regulatory target

Personal data and its processing

AI systems and their deployment

Primary mechanism

Consent, Notice, Data Principal rights, accountability

Risk classification, conformity assessment, human oversight

Innovation flexibility

Higher; less direct regulation of AI systems themselves

Lower; high-risk AI faces significant pre-deployment requirements

Individual empowerment

High in theory; burden of proof on individual during grievances

High for prohibited practices; systematic protections for high-risk AI

Accountability model

Data Fiduciary accountability for processing outcomes

Developer accountability for system design and deployment

Gap for AI governance

Does not directly regulate AI systems; Black Box problem unaddressed

Addresses AI systems directly but may create compliance barriers for smaller innovators

Future convergence potential

India may need dedicated AI Governance Act to complement DPDP

EU approach may need data governance complement for coherent framework

India's approach is more conducive to the domestic AI startup ecosystem and avoids the prescriptive technology mandates that some argue chill innovation. However, it places a higher burden on individual Data Principals who must initiate grievance proceedings to enforce their rights, and it does not directly address the systemic risks of AI deployment in high-stakes domains that the EU AI Act specifically targets.

The Data Protection Board: A New Institutional Architecture for the Digital Age

The establishment of the Data Protection Board of India in late 2025 represents a crucial institutional development that complements the DPDP Act's substantive framework. The DPB provides a specialised forum for adjudicating digital data disputes, bypassing the delays and technical unfamiliarity of traditional civil courts.

The significance of the DPB for AI governance cannot be overstated. The cases it will face in 2026 and beyond will be among the most technically complex in Indian legal history: cases involving AI data scraping, Machine Unlearning demands, algorithmic discrimination, and the accountability of black box systems for data inaccuracies. These are not cases that a general civil court, however distinguished its bench, is well equipped to handle. The DPB, if properly constituted with members who combine legal expertise with technical competence, has the potential to develop a body of AI-specific data protection jurisprudence that will guide both enforcement and industry practice for decades.

The DPB's first major enforcement tests are expected to involve AI data scraping by both domestic and foreign platforms. The outcomes of these early cases will be defining: they will establish whether the DPDP Act's consent and accountability framework is a real constraint on AI companies or a paper tiger, and they will determine India's position in the global regulatory landscape as a serious data protection jurisdiction or one that prioritises innovation over individual rights.

Conclusion: The Invisible Algorithm Must Answer to the Visible Law

The DPDP Act 2023, implemented through the Rules of 2025 and operationalised through the Data Protection Board established in late 2025, is a genuine and important achievement in Indian data governance. It rests on the strongest possible constitutional foundation in the Puttaswamy judgment, introduces contemporary accountability standards, and creates institutional infrastructure capable of meaningful enforcement.

But it is not sufficient for the AI challenge it now faces. The consent and Notice framework, designed for identifiable and deliberate data collection, struggles to address the mass scraping and training processes of Generative AI. The Right to Erasure creates obligations that current AI technology cannot technically satisfy. The accountability framework for data accuracy is tested to its limits by the hallucination problem inherent in large language models. And the Black Box nature of modern AI creates transparency demands that the systems themselves cannot currently meet.

Three developments are certain as we look toward the rest of 2026 and beyond. The Data Protection Board will face its first major tests in cases involving AI data scraping and these cases will define the practical scope of the DPDP Act's consent requirements. Machine Unlearning will become the most significant area of legal-technical research in Indian data law, as the Right to Erasure encounters the realities of trained AI systems. And India will likely need a dedicated AI Governance Act, analogous in ambition if not identical in design to the EU AI Act, to complement the DPDP Act's data-centric framework with direct regulation of AI systems and their deployment.

The goal of Indian law is to create a Digital Nagrik, a digital citizen who is empowered, informed, and protected. The success of this vision depends on how effectively the legal system can hold the invisible algorithms of the data economy accountable to the visible law of the land. That accountability is not yet fully established. Building it is the defining task of Indian data law in the years ahead.

Frequently Asked Questions (FAQs) on the DPDP Act 2023 and AI Governance in India

  1. What is the DPDP Act 2023 and what does it govern? The Digital Personal Data Protection Act, 2023 is India's primary legislation governing the processing of personal data in digital form. It establishes obligations for Data Fiduciaries, rights for Data Principals, and creates the Data Protection Board of India as the regulatory and adjudicatory body.


  2. What constitutional provision underlies the DPDP Act? The DPDP Act rests on the recognition of the right to privacy as a fundamental right under Article 21 of the Constitution of India, established by the nine-judge bench in Justice K.S. Puttaswamy (Retd.) v. Union of India (2017) 10 SCC 1.


  3. Who is a Data Fiduciary under the DPDP Act and does this cover AI companies? A Data Fiduciary is any entity that determines the purpose and means of processing personal data. The DPDP Rules, 2025 clarify that processing includes AI training, fine-tuning, and prompt-response cycles, meaning AI companies processing Indian citizens' data are Data Fiduciaries subject to the Act's obligations.


  4. What are Significant Data Fiduciaries and what additional obligations apply to them? Significant Data Fiduciaries are entities designated by the government based on the volume and sensitivity of data they process and their potential impact on individuals. They are subject to enhanced obligations including Data Protection Impact Assessments, data protection officer appointments, and independent audits.


  5. What is the Machine Unlearning problem and why does it matter under the DPDP Act? Machine Unlearning refers to the technically difficult process of removing the influence of specific data points from a pre-trained AI model. Section 12 of the DPDP Act grants individuals the Right to Erasure, but current AI architectures generally cannot comply with individual erasure requests, creating a potential compliance gap that may attract penalties of up to Rs 250 crores.


  6. What is the Black Box problem in AI governance? The Black Box problem refers to the opacity of modern AI systems whose internal reasoning processes cannot be fully understood or explained by their developers. This creates tension with the DPDP Act's transparency and accountability requirements, particularly when AI systems generate false information about individuals, engaging Section 8's duty to ensure data accuracy.


  7. How does India's approach to AI regulation compare to the EU AI Act? India's DPDP Act regulates personal data and its processing, focusing on individual data sovereignty. The EU AI Act regulates AI systems themselves based on risk level. India's approach is more innovation-friendly but places a greater burden on individuals to enforce their rights, while the EU approach directly addresses systemic AI risks but may constrain innovation.


  8. What is the Data Protection Board of India and what role will it play in AI governance? The Data Protection Board of India was established in late 2025 as a specialised regulatory and adjudicatory body for data disputes. It will likely face its first major AI governance tests in cases involving data scraping and erasure demands against AI companies, with its early decisions expected to define the practical scope of the DPDP Act's application to AI.
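To make the Machine Unlearning problem described in Question 5 above concrete, the following simplified Python sketch is offered as an illustration only: the toy "model" and the sharding scheme are hypothetical constructs, loosely inspired by sharded-training proposals in the machine learning research literature, not a description of any real AI system. It shows why deleting a record from the source dataset does not erase its influence from a trained model, and how sharded training can at least localise the cost of exact retraining.

```python
# Illustrative sketch only: a toy "model" whose single parameter is the
# average of its training records, so every record's influence is baked in.

def train(records):
    """Toy training step: the learned parameter is the mean of the data.
    Removing one record's influence requires retraining on the remainder."""
    return sum(records) / len(records)

# Monolithic training: one model over all the data.
data = [4.0, 8.0, 6.0, 2.0]
model = train(data)  # every record influences the parameter

# An erasure request for the record 8.0 cannot be honoured by deleting
# the source record alone: the trained parameter still reflects it.
data_after_erasure = [x for x in data if x != 8.0]
retrained = train(data_after_erasure)  # full retraining is the only exact fix
assert model != retrained

# Sharded training (hypothetical scheme): split the data, train one
# sub-model per shard, aggregate their outputs. Erasing a record then
# forces retraining of only the shard that contained it.
shards = [[4.0, 8.0], [6.0, 2.0]]
sub_models = [train(s) for s in shards]

shards[0] = [x for x in shards[0] if x != 8.0]  # erase from shard 0 only
sub_models[0] = train(shards[0])                # retrain just that shard
aggregate = sum(sub_models) / len(sub_models)   # shard 1 is untouched
```

Even this shard-level strategy is rarely available for real large language models, which are trained in a single pass over massive corpora, which is why compliance with Section 12 erasure requests remains an open technical problem rather than a settled engineering practice.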


Key Takeaways: Everything You Must Know About the DPDP Act 2023 and AI Governance in India

The DPDP Act 2023 and the DPDP Rules 2025 together constitute India's foundational data governance framework, resting on the constitutional right to privacy established in Puttaswamy v. Union of India (2017).

The Act establishes a Data Fiduciary accountability framework under which AI companies processing Indian citizens' data, including for training and fine-tuning purposes, are subject to consent, Notice, and accountability obligations.

The consent and Notice framework of Sections 5 and 6 creates significant legal uncertainty for AI training that relies on mass data scraping, as the specific, informed consent required by the Act has almost never been obtained for AI training purposes.

Significant Data Fiduciaries are subject to enhanced obligations including Data Protection Impact Assessments that must specifically address algorithmic bias risks.

The Right to Erasure under Section 12 creates the most technically challenging AI compliance obligation, as Machine Unlearning from pre-trained models is extremely difficult and, in most cases, currently infeasible.

The Black Box nature of modern AI creates tensions with the DPDP Act's transparency requirements and engages Section 8's strict accountability standard for data accuracy when AI systems hallucinate or generate false personal data.

The shift to a strict accountability regime under the DPDP Act, with penalties of up to Rs 250 crores for significant breaches, is the most consequential development in Indian technology law for the commercial AI sector.

India's data-centric regulatory approach differs fundamentally from the EU AI Act's technology-centric, risk-based approach; India's framework is more innovation-flexible but places a higher individual burden of rights enforcement.

The Data Protection Board of India, established in late 2025, will face its defining tests in 2026 through AI data scraping cases whose outcomes will determine the real-world scope of the DPDP Act's protections.

India will likely need a dedicated AI Governance Act to complement the DPDP Act's data-centric framework with direct regulation of AI systems, their risk levels, and their deployment in high-stakes domains.

References

The Digital Personal Data Protection Act, No. 22 of 2023, India Code (2023): The primary legislation governing digital personal data protection in India, establishing the Data Fiduciary framework, consent requirements, Data Principal rights, and the Data Protection Board.

Digital Personal Data Protection Rules, 2025, notified by the Ministry of Electronics and Information Technology: The subordinate legislation implementing the DPDP Act, clarifying that AI training, fine-tuning, and prompt-response cycles constitute processing under the Act.

The Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011: The predecessor regulatory framework for data protection in India, superseded in significant respects by the DPDP Act.

Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1: The nine-judge Supreme Court bench decision recognising the right to privacy as a fundamental right under Article 21 of the Constitution and establishing the proportionality framework for assessing interference with privacy.

Vinit Kumar v. Central Bureau of Investigation, 2019 SCC OnLine Bom 3155: The Bombay High Court decision on surveillance and privacy, providing judicial context for the constitutional protections applicable to digital data.

Ministry of Electronics and Information Technology, Report of the Committee of Experts on Non-Personal Data Governance Framework (2025): The government's policy framework for non-personal data governance, relevant to AI training on aggregated and anonymised datasets.

NITI Aayog, National Strategy for Artificial Intelligence: #AIforAll (Discussion Paper, updated 2025): The government's strategic framework for AI development, providing the policy context within which the DPDP Act's AI governance provisions must be understood.

Privacy in the Age of Algorithms, 14 Indian Journal of Law and Technology 45 (2025): Academic analysis of the intersection of algorithmic decision-making and privacy law under the emerging Indian data protection framework.

Disclaimer

This article is published by CLEAR LAW (clearlaw.online) strictly for educational and informational purposes only. It does not constitute legal advice, legal opinion, or any form of professional counsel, and must not be relied upon as a substitute for consultation with a qualified legal practitioner. Nothing contained herein shall be construed as creating a lawyer-client relationship between the reader and the author, publisher, or CLEAR LAW (clearlaw.online).

All views, interpretations, and conclusions expressed in this article are solely those of the author and represent independent academic analysis. CLEAR LAW (clearlaw.online) does not endorse, verify, or guarantee the accuracy, completeness, or reliability of the content, and expressly disclaims any responsibility for the same.

While reasonable efforts are made to ensure that the information presented is accurate and up to date, no warranties or representations, express or implied, are made regarding its correctness, adequacy, or applicability to any specific factual or legal situation. Laws, regulations, and judicial interpretations are subject to change, and the content may not reflect the most current legal developments.

To the fullest extent permitted by applicable law, CLEAR LAW (clearlaw.online), the author, editors, and publisher disclaim all liability for any direct, indirect, incidental, consequential, or special damages arising out of or in connection with the use of, or reliance upon, this article.

Readers are strongly advised to seek independent legal advice from a qualified professional before making any decisions or taking any action based on the contents of this article. Reliance on any information provided in this article is strictly at the reader's own risk.

By accessing and using this article, the reader expressly agrees to the terms of this disclaimer.