
Deepfake technology, fueled by the rapid development of artificial intelligence (AI) and machine learning, has become a double-edged sword of the digital era. Although it enables creative and beneficial applications, it poses serious risks to individual rights, public trust, and national security. As it becomes possible to produce increasingly realistic synthetic images, videos, and audio of people saying and doing things they never did, new forms of harassment, fraud, political manipulation, and non-consensual intimate imagery have been enabled.[1] Such deepfakes, along with other synthetic media, can "undermine public trust in authentic content" at critical moments such as elections, while also causing significant emotional and reputational damage to victims. In India, the spread of deepfakes on social media platforms has already produced sensational cases in which deepfaked videos of public figures circulated online, creating confusion and, in some instances, public unrest.[2]
In the absence of laws specifically addressing the production or sharing of deepfakes, however, the police and judiciary have been forced to work within a patchwork of existing rules on privacy, defamation, identity theft, and obscenity to prosecute those responsible. This Article assesses the existing Indian legal landscape on deepfakes, highlights the key challenges relating to privacy, consent, impersonation, cyberbullying, and disinformation, and compares India's response with the legislative responses in the United States and the regulatory measures adopted in China. It concludes with suggestions for crafting regulation that balances free speech with strong protections against the misuse of deepfake technology.
Legal Framework in India
Although the Constitution of India guarantees the right to freedom of speech and expression under Article 19(1)(a), it is subject to several "reasonable restrictions" on grounds including security, public order, defamation, and morality.[3] The right to privacy under Article 21 was recognized as a fundamental right in the landmark case K.S. Puttaswamy v. Union of India, giving constitutional backing against privacy infringements by the State and private actors alike.[4] No law specifically deals with deepfake technologies. Instead, the Information Technology Act, 2000 (IT Act), and its amendments are invoked in such cases. Section 66C penalizes identity theft by electronic means; Section 66D punishes cheating by impersonation using computer resources.[5] Section 66E prohibits the capture or publication of images of a person's private areas without consent, allowing victims to press charges against perpetrators who create and distribute such images.[6] Sections 67A and 67B target sexually explicit content and child pornography, which can cover deepfake revenge pornography.[7]
The IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, require online platforms to remove unlawful content upon notice and to establish complaint mechanisms.[8] The Bharatiya Nyaya Sanhita (BNS), 2023, India's new criminal code replacing the Indian Penal Code, includes offenses relevant to deepfakes: Section 351 punishes criminal intimidation, Section 356 addresses defamation, including through electronic publication, and Section 77(1) criminalizes voyeurism. The Digital Personal Data Protection Act, 2023 (DPDP Act) requires informed consent for the processing of personal data, including biometric data of the kind used to create deepfakes.[9] While these laws allow for the prosecution of fraud, defamation, and privacy violations involving deepfakes, none of them was drafted to regulate synthetic media specifically, resulting in legal uncertainty and enforcement challenges.
Key Legal Challenges
The application of existing laws to deepfake technology raises several challenges. First, privacy and consent issues arise when a person's likeness is used without permission. Although Section 66E of the IT Act makes the unauthorized capture of private images illegal, deepfakes made from publicly available photographs may escape this rule.[10] The DPDP Act's consent requirements cover biometric data but do not clarify the scope of consent for AI-generated content.[11]
The absence of a cohesive "right of publicity" statute in India leaves victims to navigate multiple statutes in search of relief, often slowly and with inconsistent results. Second, deepfakes enable impersonation and fraud that threaten economic and national security. Sections 66C and 66D target identity theft and cheating by impersonation, but proving causation and linking synthetic content to offenders requires advanced digital forensics and international cooperation.[12] Third, cyberbullying and harassment through non-consensual deepfake pornography or threatening videos involve a complex mix of obscenity provisions (Sections 67A–B), defamation (BNS Section 356), and harassment offenses under the BNS.[13] Victims often face stigma as well as technical difficulty in proving that the content is fake. Fourth, deepfakes amplify misinformation and public order concerns. While the BNS punishes statements conducing to public mischief, vague language and a high burden of proof limit its effectiveness.[14] India's reactive takedown system relies on intermediary liability rules and often fails to stop harmful content from going viral.[15] Fifth, enforcement varies across states, with law enforcement and cybercrime units lacking adequate training and resources to detect AI-generated manipulations.[16] These issues underscore the need for laws designed for the distinctive characteristics of deepfake technology.
Comparative International Perspectives
Around the world, jurisdictions are testing different regulatory options for deepfakes. In the United States, the proposed Deep Fakes Accountability Act would require clear labeling of AI-generated media and create federal offenses for distributing malicious deepfakes, with penalties of up to five years in prison.[17] Several state laws, including those in California and Texas, ban creating and sharing election-related deepfakes or non-consensual pornographic deepfakes within certain time frames and under certain conditions.[18] Critics, however, warn that these laws might suppress protected speech.
In China, the Cyberspace Administration's Provisions on the Administration of Deep Synthesis (2023) impose strict rules: service providers must verify users, label all AI-generated content, and promptly remove unauthorized deepfakes.[19] Violators can face license revocation and heavy fines. China's approach is proactive and technology-oriented, unlike India's complaint-driven, reactive strategy. The European Union's AI Act, meanwhile, subjects deepfakes to transparency obligations, requiring that AI-generated or manipulated content be disclosed as such. These international approaches, with their mandatory labeling requirements and defined governance frameworks, offer useful models for fostering trust while ensuring accountability.
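To make the labeling obligation concrete, the short sketch below illustrates one naive way a provider might comply: stamping a visible disclosure on an AI-generated image and embedding a machine-readable flag in its metadata. This is an illustrative sketch only, written in Python with the Pillow imaging library and hypothetical file names; no regulator prescribes this exact mechanism, and robust provenance standards (such as C2PA) rely on cryptographically signed metadata rather than a simple text tag.

```python
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_ai_image(src: str, dst: str) -> None:
    """Stamp a visible AI-disclosure notice and embed a machine-readable flag."""
    img = Image.open(src).convert("RGB")
    # Visible disclosure in the corner, as mandatory-labeling rules envisage.
    draw = ImageDraw.Draw(img)
    draw.text((10, img.height - 20), "AI-GENERATED CONTENT", fill=(255, 255, 255))
    # Machine-readable flag that platforms and detection pipelines can check.
    meta = PngInfo()
    meta.add_text("synthetic-media", "true")
    img.save(dst, "PNG", pnginfo=meta)

label_ai_image("generated_face.png", "labeled_face.png")  # hypothetical files
```

Even this trivial example exposes the enforcement question that labeling mandates raise: a visible stamp can be cropped out and a metadata tag stripped on re-upload, which is why the Chinese and proposed U.S. regimes pair labeling duties with platform-side verification obligations.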
Evaluation of Legal Adequacy
While India's legislation currently permits prosecution of some harms arising from deepfake-related conduct, it does not readily address the unregulated proliferation of synthetic media. Stitching a case together from existing fraud, defamation, and privacy laws, each with its own elements and evidentiary requirements, is challenging. Deepfakes can be quickly altered and shared anonymously across borders, outpacing traditional investigative techniques.[20] Section 79's safe-harbor rule discourages proactive platform monitoring, while enforcement agencies often lack the AI expertise needed to identify deepfakes in a timely manner. The DPDP Act's general data-processing principles offer minimal protection and do not address the specifics of biometric synthesis and AI-generated replicas. Compared with China's specific labeling and user-authentication rules and the federal offense framework proposed in the U.S., India's system remains reactive and fragmented. Without focused legislative action, victims endure protracted, expensive legal proceedings, and platforms have little incentive to develop proactive detection technology. These trends reveal a widening gap between the rapid evolution of AI and outdated legislative approaches, ultimately harming individual rights and diminishing public trust.
Beyond these legal gaps, there are considerable challenges around evidence and capacity in the legal system. Deepfakes are engineered to resemble authentic recordings, often leaving no signs of alteration visible to the untrained eye. Indian courts and law enforcement therefore face a steep learning curve in analyzing synthetic media, managing the chain of custody for digital evidence, and producing credible expert witnesses.[21] Many state cybercrime units lack dedicated forensic labs capable of running deepfake detection algorithms or conducting detailed video analysis.[22] Detection tools exist in academia and the private sector, but admissibility under the Indian Evidence Act (Sections 65A–65B) requires verifying the validity and accuracy of those tools and the qualifications of the analyst using them; most prosecutors and judges are unfamiliar with this verification process. In addition, deepfakes can be modified and quickly re-uploaded at new URLs, complicating the preservation of evidence and making it difficult to link content to an offender.
Without best practices for handling digital evidence, such as metadata extraction, hash validation, and expert identification, many deepfake cases are likely to be dismissed for want of reliable evidence. A coordinated response is needed: fund forensic labs, train police and judges in AI forensics, and publish guidelines setting minimum technical standards for deepfake evidence. Strengthening the evidentiary foundation will help Indian courts deliver timely and trustworthy decisions in deepfake cases.
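As an illustration of the kind of basic technical standard such guidelines might codify, the sketch below shows a minimal chain-of-custody routine: hashing a seized media file, logging the digest with a timestamp and custodian, and re-verifying integrity before trial. It is a simplified sketch using only the Python standard library, not a prescribed forensic procedure; the file names and log format are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large videos do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def log_seizure(evidence: Path, custodian: str, log: Path) -> dict:
    """Record hash, size, timestamp, and custodian at the moment of seizure."""
    entry = {
        "file": evidence.name,
        "sha256": sha256_of(evidence),
        "size_bytes": evidence.stat().st_size,
        "seized_at_utc": datetime.now(timezone.utc).isoformat(),
        "custodian": custodian,
    }
    with log.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")  # append-only custody log
    return entry

def verify_integrity(evidence: Path, recorded_sha256: str) -> bool:
    """True only if the file is bit-for-bit identical to the seized copy."""
    return sha256_of(evidence) == recorded_sha256

# Hypothetical usage: hash a suspected deepfake clip at seizure, verify at trial.
record = log_seizure(Path("suspect_clip.mp4"), "IO, Cyber Cell", Path("custody.jsonl"))
assert verify_integrity(Path("suspect_clip.mp4"), record["sha256"])
```

A matching hash does not prove the clip is authentic, only that it has not been altered since seizure; that distinction between integrity and authenticity is precisely what Section 65B certification and expert testimony must then address.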
Recommendations
Addressing deepfakes will require a comprehensive approach. First, India should enact dedicated legislation prohibiting the creation and dissemination of malicious or non-consensual deepfakes, with straightforward definitions and structured penalties. Such legislation should require informed consent for the use of a person's likeness and mandate visible labeling or watermarking of AI-generated content, borrowing from the U.S. Deep Fakes Accountability Act and China's Deep Synthesis regulations.[23][24] Second, amendments to Section 79 of the IT Act should require intermediaries to deploy AI detection tools and to remove confirmed deepfakes within a short time frame, on pain of losing their safe-harbor protection.[25] Third, the DPDP Act or its rules should explicitly categorize biometric identifiers as sensitive data requiring explicit consent for synthetic reproduction.[26] Fourth, the government must invest in the capabilities of law enforcement and the judiciary, including AI forensics and dedicated cybercrime response capacity. Fifth, public campaigns should improve media literacy and encourage the reporting of suspected deepfakes. Finally, India should participate in international dialogue and treaty discussions on AI regulation, building a basis for cross-border cooperation on deepfake detection, evidence sharing, and the extradition of offenders.
Beyond legislative and technological measures, India must also treat deepfakes as a matter of incident response and national security. Malicious deepfakes could undermine trust in democratic institutions by simulating fake speeches or emergency messages during elections, riots, or international tensions.[27]
A widely circulated deepfake of a senior politician could trigger mass panic or retaliatory action, especially among populations with limited internet access for verification. The National Crisis Management Committee (NCMC) and the Election Commission of India should therefore develop response protocols for the different threats deepfakes generate. Such a plan should include a designated chain of command to verify whether circulating media is authentic or manipulated, public notification through designated government channels, and coordination with telecom service providers and internet companies to throttle or block identified synthetic media. Cybersecurity bodies such as CERT-In could also proactively build an alert system, a "Deepfake Alert Network," that collects suspected deepfakes, cross-checks them against authentic media sources, and issues real-time alerts.[28] National security and military intelligence agencies should likewise work with the Ministry of Information and Broadcasting, which already plays a role in countering misinformation campaigns, to develop contingency protocols, including simulation exercises to test readiness against mass attacks on credibility. Embedding such protocols in India's national security architecture would allow deepfake-driven crises to be contained rapidly and transparently while preserving public order and public trust.
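The sketch below illustrates, in deliberately simplified form, the cross-checking idea behind the hypothetical Deepfake Alert Network: fingerprint a reported clip and look it up in a registry of officially released media. An exact-hash match only proves the clip is an unmodified copy of a known original; a production system would need perceptual hashing and signed provenance metadata to catch altered variants, so treat this as an assumption-laden sketch rather than a blueprint. The registry contents and file names are hypothetical.

```python
import hashlib
from pathlib import Path

# Hypothetical registry mapping SHA-256 digests of officially released media
# (e.g., verified speeches published through government channels) to sources.
AUTHENTIC_REGISTRY: dict[str, str] = {
    # "3f5a...e9": "PIB press briefing video, 2025-04-04",
}

def fingerprint(path: Path) -> str:
    """Hash the reported clip for lookup against the authentic registry."""
    with path.open("rb") as f:
        return hashlib.file_digest(f, "sha256").hexdigest()  # Python 3.11+

def triage(suspect: Path) -> str:
    """Classify a reported clip: exact copy of a known original, or escalate."""
    source = AUTHENTIC_REGISTRY.get(fingerprint(suspect))
    if source is not None:
        return f"MATCH: unmodified copy of authentic source ({source})"
    # No exact match: the clip is either novel or an altered copy of an
    # original, so route it to forensic analysts and, if confirmed fake,
    # push a real-time alert to subscribed platforms.
    return "NO MATCH: escalate for forensic review and possible alert"

print(triage(Path("viral_clip.mp4")))  # hypothetical reported file
```

The design point the sketch makes is institutional rather than technical: an alert network is only as useful as the registry of authenticated originals behind it, which is why official channels would need to deposit verified media with the network as a matter of routine.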
Conclusion
Ultimately, legal and technical responses must be grounded in a broader, ethically anchored AI governance framework. Deepfakes are only one form of synthetic media, and as the technology grows, India should establish a permanent, multi-stakeholder AI Advisory Council or similar governance body involving government regulators, digital rights NGOs, academic experts, and industry representatives. This Council would continuously monitor emerging threats such as deepfakes and identify when and how the rules need updating. It could issue best-practice recommendations for the responsible use of AI, host annual hackathons to spur innovation in deepfake detection, and maintain a registry of certification authorities that endorse watermarking and provenance standards. By convening technologists, ethicists, and lawmakers, India can sustain a continuous stakeholder dialogue on AI and keep regulation from lagging behind the technology. Such collaborative governance would not only strengthen India's legal toolkit against deepfakes but also foster a healthier digital ecosystem for future innovation while safeguarding individual rights.
[1] V. Smith, "Deepfake Risks and Ethical Considerations," Journal of AI Security, vol. 5, no. 2, pp. 45-62, 2024.
[2] Press Information Bureau, Government of India, "Advisory on Synthetic Media," Apr. 4, 2025.
[3] Constitution of India, art. 19(1)(a).
[4] K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1.
[5] Information Technology Act, No. 21 of 2000, Secs. 66C, 66D.
[6] Information Technology Act, No. 21 of 2000, Sec. 66E.
[7] Information Technology Act, No. 21 of 2000, Secs. 67A-67B.
[8] IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
[9] Digital Personal Data Protection Act, 2023.
[10] Information Technology Act, No. 21 of 2000, Sec. 66E.
[11] Digital Personal Data Protection Act, 2023.
[12] Information Technology Act, No. 21 of 2000, Secs. 66C, 66D.
[13] Bharatiya Nyaya Sanhita, 2023, Secs. 351, 356, 77(1).
[14] Bharatiya Nyaya Sanhita, 2023, Sec. 353.
[15] IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
[16] VIF Cybersecurity Report, "Law Enforcement and Deepfakes," 2024.
[17] Deep Fakes Accountability Act, H.R. 3230, 116th Cong. (2019).
[18] Cal. Gov't Code Sec. 8314; Tex. Penal Code Sec. 33.07.
[19] Cyberspace Administration of China, "Provisions on the Administration of Deep Synthesis of Internet Information Services," Jan. 2023.
[20] A. Johnson, "Deepfake Proliferation and Legal Responses," International Journal of Cyber Law, vol. 12, no. 1, pp. 1-25, 2023.
[21] Anvar P.V. v. P.K. Basheer, (2014) 10 SCC 473.
[22] CERT-In, Guidelines on Digital Evidence Management 4 (2023).
[23] Deep Fakes Accountability Act, H.R. 3230, 116th Cong. (2019).
[24] A. Johnson, "Deepfake Proliferation and Legal Responses," International Journal of Cyber Law, vol. 12, no. 1, pp. 1-25, 2023.
[25] IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
[26] Digital Personal Data Protection Act, 2023.
[27] Election Commission of India, Model Code of Conduct, Rule 29 (1961).
[28] CERT-In Advisory on Synthetic Media, Apr. 4, 2025.
Author: Khushi Jain
