Deepfakes and India’s Legislative Imperative 

THE EPISTEMOLOGICAL CRISIS AND THE COLLAPSE OF SHARED REALITY 

The emergence of deepfakes marks a civilizational rupture in how truth is perceived, verified, and institutionalized. For centuries, law and society have operated on the assumption that recorded audiovisual material (photographs, videos, and voice recordings) constitutes a reliable representation of reality. Courts treated visual evidence as the “gold standard” of proof, while democratic discourse relied upon recorded speech to anchor public debate. Deepfakes fundamentally destabilize this epistemic foundation by introducing a permanent shadow of doubt over all recorded material. The result is not merely deception, but what may be described as the collapse of a shared reality.1 

From a philosophical standpoint, truth has functioned as a collective social agreement rather than an absolute metaphysical certainty. The legal system operationalizes this agreement through evidentiary rules, standards of proof, and presumptions of authenticity. Deepfakes rupture this framework by rendering falsification indistinguishable from authenticity. This produces the phenomenon known as the “liar’s dividend,” wherein perpetrators of real wrongdoing can plausibly deny authentic evidence by claiming it to be synthetic.2 The danger, therefore, lies not only in believing falsehoods, but in disbelieving truths. 

When this denial becomes normalized, society enters a state of epistemic nihilism, a condition in which citizens cease to believe in the possibility of objective verification altogether. In such a climate, accountability collapses, institutional trust erodes, and democratic deliberation becomes impossible. For India, a constitutional democracy predicated on reasoned discourse and the rule of law, this epistemological breakdown constitutes a direct threat to constitutional governance.3 

FEMINIST AND VICTIM-CENTRIC LEGAL THEORY 

The gendered impact of deepfakes necessitates a feminist legal lens. Traditional criminal law frameworks conceptualize harm in terms of physical injury or tangible loss. Synthetic sexual violence, however, operates through reputational destruction, psychological trauma, and social exclusion. These harms are intensified in patriarchal social contexts where female sexuality is policed through communal honor.4 

Indian jurisprudence has increasingly recognized dignity and privacy as intrinsic to personhood under Article 21 of the Constitution.5 For victims of non-consensual intimate imagery, the harm materializes the moment content is circulated, not when its falsity is later established. The law’s fixation on authenticity thus becomes irrelevant; social punishment precedes legal vindication. This gap highlights the inadequacy of truth-based defenses in cases of synthetic abuse. 

A victim-centric framework would invert this evidentiary burden. Instead of requiring victims to prove falsity or intent, the law should presume harm from the unauthorized circulation of synthetic sexual content. This shift aligns with constitutional morality and international human rights standards, recognizing that consent, not authenticity, is the normative anchor of sexual autonomy. 

DEEPFAKE-ENABLED FINANCIAL CRIME AND SYSTEMIC RISK THEORY 

The rise of deepfake-driven financial fraud challenges traditional notions of individual criminal liability. Economic offences enabled by AI are rarely isolated acts; they are carried out by distributed networks that exploit systemic vulnerabilities. Voice cloning and video impersonation transform trust-based verification mechanisms into attack vectors, rendering conventional fraud detection obsolete. 

From a regulatory perspective, this constitutes systemic risk. When authentication protocols across banks, fintech platforms, and government services rely on biometric or audiovisual verification, deepfakes threaten the integrity of the entire financial ecosystem. The projected losses associated with Jamtara 2.0 must therefore be understood as early indicators of structural fragility rather than episodic cybercrime. 

A policy-oriented response requires harmonization between cyber law, banking regulation, and consumer protection. Mandatory multi-factor authentication standards, liability allocation frameworks, and AI-specific risk audits must complement criminal sanctions. Without such integration, enforcement will remain perpetually reactive. 

REIMAGINING INTERMEDIARY LIABILITY IN THE AGE OF GENERATIVE AI 

The intermediary liability framework under Section 79 of the IT Act was designed for passive hosting environments. Generative AI platforms fundamentally alter this paradigm. When platforms provide tools that actively generate content, recommend outputs, or algorithmically amplify engagement, the distinction between intermediary and publisher becomes conceptually unstable. 

The 2025 amendments partially address this by conditioning safe harbour on compliance with detection and labeling obligations. However, a deeper doctrinal recalibration is necessary. Liability must be linked not only to knowledge, but to architectural design choices. Platforms that deploy systems optimized for virality without safeguards effectively externalize harm onto users and victims. 

A future-proof framework would adopt a duty-of-care model, requiring platforms to anticipate foreseeable misuse and implement proportionate safeguards. This approach aligns with emerging global norms and preserves innovation while internalizing social costs. 

INSTITUTIONAL DESIGN 

The proposal for a National Deepfake Grievance Office represents a critical institutional innovation. Local police stations often lack the technical expertise to assess synthetic media, resulting in delayed or dismissed complaints. A centralized, specialized body could provide rapid forensic assessment, coordinate platform takedowns, and issue interim relief orders. 

Such an institution should function with quasi-judicial powers, enabling it to order immediate takedowns, mandate preservation of evidence, and recommend compensation. Integration with fast-track cyber courts would ensure procedural continuity, while victim support units could address psychological and social fallout. Institutional specialization is essential to translate legislative intent into effective protection. 

DETECTION SOVEREIGNTY AND ETHICAL AI AS NATIONAL SECURITY IMPERATIVES 

Detection sovereignty must be understood as a strategic objective. Reliance on foreign-developed detection tools creates vulnerabilities related to data access, cultural bias, and geopolitical dependence. India’s linguistic diversity further complicates detection, as many global tools are optimized for Western accents and datasets. 

A National Mission on Ethical AI should prioritize the development of open-source, multilingual deepfake detection systems calibrated to Indian socio-cultural contexts. Such systems would enhance regulatory enforceability, support judicial processes, and strengthen public trust. Ethical AI development is thus inseparable from democratic resilience. 

THE TECHNICAL ARCHITECTURE OF DECEPTION: FROM GANS TO DIFFUSION MODELS 

Understanding the legal implications of deepfakes requires a basic engagement with their technical architecture. Early deepfake systems were built using Generative Adversarial Networks, or GANs, which consist of two competing neural networks: a Generator that creates synthetic images or audio, and a Discriminator that attempts to identify flaws distinguishing the synthetic output from real data. Through continuous adversarial training, the Generator progressively eliminates detectable errors, producing outputs that are increasingly indistinguishable from reality to the human eye and ear. 
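To make this adversarial dynamic concrete, the following is a minimal, illustrative sketch of a GAN training loop in PyTorch. It is not a face or voice synthesis system: the networks are tiny fully connected models, the “real” data is a random tensor standing in for a genuine dataset, and all layer sizes and hyperparameters are assumptions chosen only to show the structure of the two-player game described above.

```python
# Minimal GAN sketch: a Generator learns to fool a Discriminator.
# Illustrative only; toy 1-D data, assumed layer sizes and hyperparameters.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32

# Generator: maps random noise to a synthetic sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)

# Discriminator: scores whether a sample looks real (1) or synthetic (0).
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(batch, data_dim)      # stand-in for real training data
    fake = generator(torch.randn(batch, latent_dim))

    # Discriminator step: learn to separate real samples from synthetic ones.
    d_loss = bce(discriminator(real), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: adjust the Generator so the Discriminator labels its
    # output as real, progressively removing whatever flaws remain detectable.
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The legally relevant point is visible in the loop itself: the Discriminator serves as both quality control and adversary, so any detection signal it can find is, by construction, trained away.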

More recent advances have shifted toward diffusion models, which generate high-fidelity images and videos by progressively denoising random noise into structured outputs. These models require less curated training data and produce significantly more realistic results, particularly in facial expressions, lighting, and motion continuity. The legal significance of this shift lies in the declining reliability of traditional forensic markers used to detect manipulation. 
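For contrast, the sketch below shows the reverse (denoising) loop that diffusion samplers in the DDPM family use: starting from pure random noise and repeatedly subtracting a predicted noise component until a structured sample remains. The noise predictor here is an untrained stand-in network and the schedule values are assumed, so the output is meaningless; the purpose is only to illustrate the progressive-denoising structure described above.

```python
# Minimal sketch of DDPM-style reverse diffusion (sampling) over toy 1-D data.
# Illustrative only: untrained stand-in noise predictor, assumed noise schedule.
import torch
import torch.nn as nn

T, data_dim = 50, 64
betas = torch.linspace(1e-4, 0.02, T)        # noise schedule (assumed values)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

# Stand-in for a trained network that predicts the noise present at step t.
noise_predictor = nn.Sequential(
    nn.Linear(data_dim + 1, 128), nn.ReLU(),
    nn.Linear(128, data_dim),
)

x = torch.randn(1, data_dim)                 # begin from pure random noise
for t in reversed(range(T)):
    t_embed = torch.full((1, 1), float(t) / T)   # crude timestep conditioning
    eps = noise_predictor(torch.cat([x, t_embed], dim=1))
    # Mean update: strip out the noise predicted for this step.
    x = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
    if t > 0:                                # re-inject a smaller amount of fresh noise
        x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
# After T iterations, the random noise has been shaped into a structured sample.
```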

Equally important is the accessibility gap. Deepfake creation has migrated from specialized research laboratories to mass-deployable smartphone applications. Today, a malicious actor requires neither technical expertise nor significant resources to produce convincing synthetic media. This democratization of deception dramatically increases the scale and velocity of harm, overwhelming both legal remedies and platform moderation mechanisms. 

INDIA’S SOCIO-POLITICAL VULNERABILITY 

India’s digital ecosystem presents a uniquely fertile ground for deepfake proliferation. With over 800 million internet users and deep penetration of low-cost smartphones, synthetic media can reach millions within minutes. This scale is compounded by India’s extraordinary linguistic diversity. Deepfakes localized into regional languages and dialects can incite communal tension or political misinformation long before national fact-checkers become aware of their existence. Linguistic localization thus enables hyper-targeted manipulation with disproportionate impact. 

The problem is further exacerbated by the architecture of “dark social” platforms such as WhatsApp. Unlike public-facing platforms, WhatsApp operates through encrypted, trust-based networks of family and friends. Content circulated within these closed groups carries an inherent presumption of authenticity, significantly amplifying the persuasive power of deepfakes. The encrypted nature of these platforms also renders real-time detection and intervention nearly impossible. 

The 2024 General Elections demonstrated the political implications of these dynamics. Hyper-realistic synthetic videos were used to fabricate speeches, misrepresent policy positions, and even resurrect deceased political leaders for artificial endorsements. This raises profound ethical and legal questions concerning digital legacy. Does a deceased individual retain posthumous rights over their voice and likeness, or may political actors indefinitely deploy AI-simulated representations for strategic gain? Indian law presently offers no coherent answer. 

THE GENDERED NATURE OF SYNTHETIC VIOLENCE 

Deepfake harms are not distributed evenly across society. Women are disproportionately targeted through non-consensual intimate imagery, a form of digital violence that weaponizes sexuality to enforce social control. In many parts of India, the mere circulation of an intimate video, regardless of its authenticity, can result in irreversible social ostracization, loss of livelihood, and severe psychological trauma. The law’s focus on eventual disproval fails to account for the immediacy and permanence of social harm. 

The emergence of “clothed-to-nude” AI tools represents a catastrophic failure of existing privacy frameworks. These tools do not require stolen or hacked images; they operate entirely on publicly available social media photographs. This renders traditional consent-based data protection regimes ineffective, as no unlawful data acquisition occurs. The harm arises entirely at the level of synthetic output, exposing the inadequacy of laws that regulate data inputs but ignore algorithmic manipulation. 

ECONOMIC WARFARE AND THE RISE OF ‘JAMTARA 2.0’ 

Beyond social and political harm, deepfakes pose a systemic economic threat. The phenomenon popularly described as “Jamtara 2.0” reflects the evolution of financial fraud through AI-enabled deception. Voice cloning technologies require as little as three seconds of audio to replicate a person’s speech patterns. Such samples are readily available through social media, YouTube videos, or voice notes, enabling fraudsters to bypass voice-based authentication systems with alarming ease. 

The expansion of Video KYC mechanisms in Indian banking further intensifies this vulnerability. Deepfake technology can spoof live video feeds, undermining identity verification at scale. The projected loss of over ₹70,000 crore by late 2025 underscores that this is not a peripheral cybercrime issue, but a threat to the integrity of India’s financial infrastructure. 

CRITIQUING INDIA’S EXISTING LEGAL FRAMEWORK 

The Information Technology Act, 2000, was drafted for an era of static content and human authorship. Its reliance on intent-based offenses under Sections 66C and 66D creates a fatal enforcement gap. When creators characterize deepfakes as satire or parody, establishing fraudulent intent becomes exceedingly difficult, even when real-world harm is severe. The law thus privileges expressive intent over consequential harm. 

The Digital Personal Data Protection Act, 2023, suffers from a different structural limitation. By focusing on data inputs rather than content outputs, it fails to address the core harm caused by deepfakes. The exemption for publicly available personal data significantly weakens protection for public figures, influencers, and journalists, leaving them particularly vulnerable to synthetic abuse. 

COMPARATIVE REGULATORY MODELS: LESSONS FOR INDIA 

Globally, regulatory responses to deepfakes have diverged. The European Union’s AI Act adopts a risk-based framework emphasizing transparency and the right to know, particularly through mandatory labeling of synthetic content. China’s Deep Synthesis Regulations impose stringent requirements including digital watermarking and user identity verification, though at the cost of heightened state surveillance. The United States has pursued a fragmented, sector-specific approach, addressing deepfakes through election law, pornography statutes, and consumer protection. 

India is in the process of synthesizing these models, seeking to balance constitutional freedoms with enforceable safeguards. 

THE 2025 MEITY AMENDMENTS AND MANDATORY TRACEABILITY 

The 2025 amendments to the IT Rules mark a significant shift by introducing the concept of Synthetically Generated Information. The mandatory traceability regime, including the 10% visible labeling requirement, represents an attempt to operationalize the right to know. Crucially, the amendments turn safe harbour into a compliance lever by conditioning intermediary immunity on adherence to detection and labeling obligations. This reorients platform responsibility from passive hosting to active governance. 
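As a purely illustrative sketch of what a visible-labeling obligation might look like in practice, the code below stamps a banner across the bottom of an image using the Pillow library. Reading the 10% requirement as a banner occupying 10% of the frame height is an assumption made for this example, as are the wording and placement of the label; actual compliance would turn on the final text of the rules and any accompanying metadata or provenance requirements.

```python
# Hypothetical visible-labeling sketch: overlay a banner covering roughly 10%
# of the image height. The 10%-of-height reading and the label text are
# assumptions for illustration, not the rules' prescribed format.
from PIL import Image, ImageDraw

def label_synthetic_image(path_in: str, path_out: str,
                          text: str = "SYNTHETICALLY GENERATED") -> None:
    img = Image.open(path_in).convert("RGB")
    width, height = img.size
    banner_height = max(1, int(0.10 * height))   # assumed: 10% of frame height

    draw = ImageDraw.Draw(img)
    # Opaque banner across the bottom of the frame, with the label text on top.
    draw.rectangle([(0, height - banner_height), (width, height)], fill="black")
    draw.text((10, height - banner_height + banner_height // 4), text, fill="white")
    img.save(path_out)

# Example usage (hypothetical file names):
# label_synthetic_image("generated_frame.png", "generated_frame_labelled.png")
```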

THE CASE FOR A STANDALONE DEEPFAKE PREVENTION ACT 

Despite recent reforms, fragmented regulation remains insufficient. A standalone Deepfake Prevention Act would provide technological specificity and doctrinal clarity. Such legislation should recognize digital forgery as a distinct offense under the Bharatiya Nyaya Sanhita, mandate two-hour takedowns for non-consensual sexual content, and establish specialized institutions such as a National Deepfake Grievance Office and fast-track cyber courts. These measures would shift the legal focus from abstract liability to victim-centric redressal. 

CONCLUSION 

Deepfakes represent a paradigmatic challenge to law, democracy, and social trust. They destabilize epistemic certainty, weaponize identity, and exploit systemic vulnerabilities. India’s evolving regulatory response, from traceability mandates to proposed standalone legislation, reflects growing recognition of the threat. Yet regulation must remain anchored in constitutional values, victim-centric justice, and technological realism. 

The ultimate objective is not to eliminate deception, an impossible task, but to restore the conditions under which truth remains verifiable, harm remains redressable, and accountability remains enforceable. In confronting deepfakes, India confronts the future of democratic governance itself. 


Author: Pratibha Ganvir

