
In light of the circulation of celebrity videos in which deepfake technology seamlessly morphs one person's face onto another's body, it is important to understand this phenomenon along with the pros and cons that come with it.
“Deepfakes” refer to synthetic media content created using artificial intelligence (AI) techniques. Advanced algorithms analyze and manipulate existing images, videos or audio to produce highly realistic (and often deceptive) content.[1] The goal of deepfake technology is to create synthetic data that resembles reality but has some aspect of the content manipulated.
Deepfake technology is based on two techniques:[2] deep learning and generative adversarial networks (GANs). Deep learning is a branch of machine learning that processes and analyzes vast volumes of data using artificial neural networks—algorithms inspired by the structure and operation of the brain. Numerous fields, including computer vision, robotics, speech recognition, and natural language processing, have benefited from the use of deep learning. A generative adversarial network (GAN) is a deep learning architecture that trains on a dataset to produce new, synthetic data similar to the original data, using two neural networks: a generator and a discriminator. The generator creates fake samples, while the discriminator assesses the authenticity of the generated samples against real samples from the training dataset. The two networks are trained in an adversarial manner: the generator tries to produce samples that can fool the discriminator, while the discriminator tries to correctly distinguish the generated samples from the real ones. This process continues until the generator is able to produce highly realistic synthetic data.[3]
CREATION OF DEEPFAKES
Deepfakes are created using a machine learning technique known as a Generative Adversarial Network (GAN). A GAN consists of two neural networks, a generator and a discriminator, trained on a large sample of real images, videos or audio. The generator creates synthetic data that resembles the real data. The discriminator then evaluates whether the generated data is real or fake, and its feedback drives the generator to improve the quality of its output. This continues until the synthetic data can no longer be distinguished from the real.
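The adversarial loop described above can be sketched in a toy one-dimensional setting. Everything here — the target distribution, the linear generator and discriminator, the learning rate — is an illustrative assumption for the sketch, not a production GAN:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-np.clip(u, -30, 30)))

# Generator g(z) = a*z + b tries to map noise onto the real distribution.
a, b = 1.0, 0.0
# Discriminator d(x) = sigmoid(w*x + c) scores how "real" a sample looks.
w, c = 0.1, 0.0
lr, batch, real_mean = 0.05, 64, 4.0

for step in range(2000):
    x_real = rng.normal(real_mean, 0.5, batch)   # real samples
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b                           # generator's fakes

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    s_r = sigmoid(w * x_real + c)
    s_f = sigmoid(w * x_fake + c)
    grad_w = -np.mean((1 - s_r) * x_real - s_f * x_fake)
    grad_c = -np.mean((1 - s_r) - s_f)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator update: push d(fake) toward 1, i.e. fool the discriminator.
    s_f = sigmoid(w * x_fake + c)
    grad_a = -np.mean((1 - s_f) * w * z)
    grad_b = -np.mean((1 - s_f) * w)
    a -= lr * grad_a
    b -= lr * grad_b

fakes = a * rng.normal(0.0, 1.0, 1000) + b
print(f"real mean ~ {real_mean}, generated mean ~ {fakes.mean():.2f}")
```

In practice both networks are deep neural networks trained on images or audio rather than linear functions of a scalar, but the push-pull dynamic is the same: the discriminator's feedback is the only signal the generator learns from.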
The trained model[4] can then be used to create video and image deepfakes in several ways:
(a) Face swap: replacing the face of the person in a video with another person's face;
(b) Attribute editing: altering an individual's appearance in a video, such as their hairstyle or hair colour;
(c) Face re-enactment: transferring the facial expressions of one person onto the person in the target video; and
(d) Fully synthetic material: training on real footage of people's appearances, but generating a final image that is entirely artificial.
IDENTIFICATION OF DEEPFAKES
Deepfake technology is constantly improving. Therefore, users must be able to identify content that uses deepfake technology to manipulate them.
Presently, the most effective method for verifying if a piece of media is a deepfake involves a combination of various identification techniques and exercising caution with any content that appears extremely convincing.
- Audio-visual mismatches – In certain deepfakes, there might be discrepancies between the audio and visual elements, indicating potential manipulation. For instance, the lip movements in a deepfake video might not synchronize perfectly with the audio, or the audio might include background noise or echoes absent in the video[5]. Such disparities between audio and visual cues can suggest content manipulation.
- Visual Anomalies – Certain deepfakes exhibit noticeable visual anomalies, like awkward facial movements or irregular blinking, which can signal their inauthenticity. These visual artifacts may stem from various factors, such as limitations in the training data, constraints within the deep learning algorithms, or the necessity to balance realism with computational efficiency. Common examples of visual artifacts in deepfakes include unnatural facial expressions, inconsistent eye blinking patterns, and discrepancies or omissions in background details.
- Deep Learning based Detection – Deep learning algorithms, like deep neural networks, can identify deepfakes by training them on a vast dataset containing both authentic and fabricated images, videos, or audio clips. These algorithms learn the characteristic patterns and anomalies associated with counterfeit content, such as unnatural facial movements, erratic eye blinking, and discrepancies between audio and visual elements. Once the deep learning algorithm completes its training, it can scrutinize new, unseen media to detect deepfakes. If the algorithm determines that a piece of media is fabricated, it can either prompt manual inspection or initiate further analysis.[6]
- Use of Deepfake detection tools – Several online tools and software applications are designed to identify deepfake content. One can use these tools to analyze content for potential manipulation.
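The learning-based detection idea above can be sketched with a deliberately simplified classifier. The blink-rate and lip-sync features and their distributions below are hypothetical stand-ins for the patterns a real detector would learn from data, and a hand-rolled logistic regression stands in for a deep network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-clip features: [blink rate (blinks/min), lip-sync score 0..1].
# The numbers are illustrative assumptions, not measurements from any dataset:
# authentic clips blink more regularly and sync lips more tightly with audio.
real = np.column_stack([rng.normal(17, 3, 200), rng.normal(0.9, 0.05, 200)])
fake = np.column_stack([rng.normal(6, 3, 200), rng.normal(0.6, 0.10, 200)])
X = np.vstack([real, fake])
y = np.concatenate([np.zeros(200), np.ones(200)])  # label 1 = deepfake

# Standardize features, then fit logistic regression by gradient descent.
X = (X - X.mean(axis=0)) / X.std(axis=0)
w = np.zeros(2)
bias = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + bias)))   # predicted P(deepfake)
    grad = p - y                                 # cross-entropy gradient
    w -= 0.1 * (X.T @ grad) / len(y)
    bias -= 0.1 * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + bias)))) > 0.5
print(f"training accuracy: {(pred == y).mean():.2%}")
```

Real detectors learn such features directly from pixels and waveforms with deep networks, and the hard part is generalizing to generation techniques not seen during training; the sketch only shows the supervised-classification core of the approach.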
LEGAL REGIME IN INDIA
Presently, India does not have any specific law addressing deepfakes or AI-related crime.
In certain cases, celebrities and other public figures have invoked their personality rights, which is the right to publicity and the right to privacy, to curb the misuse of their image, voice, or likeness in deepfake content. Additionally, specific provisions within the Information Technology Act, 2000 (IT Act) and its associated regulations, including the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, as amended (IT Rules), could offer assistance in addressing such concerns. Nonetheless, the legality of these recently revised regulations has also been legally contested[7].
However, in July 2023, when questioned in Parliament regarding the dissemination of deepfake content on social media platforms, India’s Ministry of Electronics and Information Technology (MeitY) contended that the existing legal framework under the IT Act adequately addressed the current challenges associated with deepfakes.
OFFENCES COMMITTED BY USING DEEPFAKE TECHNOLOGY
- Identity theft and virtual forgery: Identity theft and virtual forgery using deepfakes can lead to serious consequences for individuals and society at large. The misuse of deepfakes to steal identities, create false representations, or manipulate public opinion can damage an individual’s reputation, spread misinformation, and erode credibility.
These actions can be prosecuted under Section 66-C (punishment for identity theft) of the Information Technology Act, 2000, as well as Section 420 (cheating), Section 468 (forgery) and Section 499 (defamation) of the Indian Penal Code, 1860.
- Misinformation against Governments: The dissemination of misinformation using deepfakes to undermine or incite hatred against the Government poses significant societal risks. False information can cause confusion, erode public trust, and influence political outcomes.
Such acts may be prosecuted under Section 66-F (cyber terrorism) of the Information Technology Act, 2000, and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2022. Additionally, Sections 121 (waging war against the Government) and Section 124-A (sedition) of the Indian Penal Code, 1860, could be invoked.
- Violation of privacy, obscenity, and pornography: Deepfake technology can be exploited to create fake images or videos that harm individuals’ reputations or spread false information. It can also be used for malicious purposes such as non-consensual pornography or political propaganda. Offenses related to privacy violation, obscenity, and pornography can be prosecuted under various sections of the Information Technology Act, 2000, and the Indian Penal Code, 1860, as well as provisions of the Protection of Children from Sexual Offences Act, 2012, to safeguard the rights of women and children.
These crimes can be prosecuted under Section 67 (punishment for publishing or transmitting obscene material in electronic form), Section 67-A (punishment for publishing or transmitting material containing a sexually explicit act, etc., in electronic form), and Section 67-B (punishment for publishing or transmitting material depicting children in sexually explicit acts in electronic form) of the Information Technology Act, 2000. Additionally, Sections 292 and 294 (punishment for sale, etc., of obscene material) of the Penal Code, 1860, and Sections 13, 14 and 15 of the Protection of Children from Sexual Offences Act, 2012 (POCSO) could be invoked to protect the rights of women and children.
- Hate speech and online defamation: Deepfakes used for hate speech or online defamation can harm individuals and contribute to a toxic online environment. Prosecution for these offenses can be pursued under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2022, as well as Section 153-A (promoting enmity between different groups) and Section 499 (defamation) of the Indian Penal Code, 1860.
- Practices affecting elections: The use of deepfakes in elections poses a threat to the integrity of the democratic process. False information spread through deepfakes can influence public opinion and election outcomes. Offenses related to election interference can be prosecuted under Section 66-D (punishment for cheating by personation) and Section 66-F (cyber terrorism) of the Information Technology Act, 2000, along with relevant provisions of the Representation of the People Act, 1951.
UTILIZING DEEPFAKE TECHNOLOGY RESPONSIBLY AND SAFELY
Through the adoption of ethical guidelines and rigorous security protocols, businesses can responsibly employ deepfake technology for a variety of purposes, enhancing operational efficiency, refining marketing strategies, and fostering overall business expansion. Below are some recommended measures:
- Ethical framework and guidelines:
- Transparent usage policies:
- Businesses should establish clear and transparent policies governing the utilization of deepfake technology.
- Furthermore, they should openly communicate with stakeholders, including customers and employees, regarding the purpose and scope of deepfake applications.
- Consent and privacy protection:
- Organizations must obtain explicit consent before employing deepfake technology in applications involving individuals.
- Additionally, robust privacy protection measures should be implemented to ensure responsible handling of personal information, in compliance with data protection regulations (such as those outlined in the IT Act and IT Rules, the Digital Personal Data Protection Act, 2023, and the proposed Digital India Act).
- Security measures:
- Authentication protocols:
- Companies should deploy secure authentication protocols to verify the authenticity of deepfake-generated content.
- Various commercially available solutions, including digital watermarking technology and other media verification markers, can be adopted for this purpose.
- Blockchain technology:
- Integration of blockchain technology can enhance the security and authentication of deepfake content.
- Moreover, businesses should ensure the integrity and traceability of both content creation and distribution.
By implementing these collective security measures, businesses can mitigate the risk of malicious exploitation of deepfake technology for fraudulent purposes.
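As a concrete illustration of the watermarking and blockchain-based authentication measures above, the sketch below fingerprints content with a cryptographic hash and links provenance records into an append-only hash chain, so that tampering with any earlier record invalidates every later one. It is a minimal standard-library illustration of the concept, not any particular commercial tool:

```python
import hashlib
import json

def fingerprint(content: bytes) -> str:
    """Content hash acting as a simple tamper-evident verification marker."""
    return hashlib.sha256(content).hexdigest()

class ProvenanceChain:
    """Append-only hash chain: each record commits to the previous record,
    so altering any earlier entry breaks every subsequent link."""

    def __init__(self):
        self.records = []

    def add(self, content: bytes, creator: str) -> dict:
        prev = self.records[-1]["record_hash"] if self.records else "0" * 64
        record = {"creator": creator,
                  "content_hash": fingerprint(content),
                  "prev_hash": prev}
        record["record_hash"] = fingerprint(
            json.dumps(record, sort_keys=True).encode())
        self.records.append(record)
        return record

    def verify(self) -> bool:
        prev = "0" * 64
        for r in self.records:
            body = {k: v for k, v in r.items() if k != "record_hash"}
            if r["prev_hash"] != prev:
                return False
            if r["record_hash"] != fingerprint(
                    json.dumps(body, sort_keys=True).encode()):
                return False
            prev = r["record_hash"]
        return True

chain = ProvenanceChain()
chain.add(b"original-video-bytes", creator="studio-a")
chain.add(b"edited-video-bytes", creator="studio-a")
print("chain valid:", chain.verify())
chain.records[0]["creator"] = "impostor"   # tamper with an earlier record
print("after tampering:", chain.verify())
```

Production systems would anchor such records on a distributed ledger and embed the watermark in the media itself, but the principle — verify content against a cryptographic commitment made at creation time — is the same.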
CONCLUSION
The current legislation in India concerning cyber offenses involving deepfakes falls short in addressing the issue comprehensively. The absence of specific provisions in the IT Act, 2000 related to artificial intelligence, machine learning, and deepfakes complicates effective regulation of these technologies. To better regulate offenses stemming from deepfakes, updating the IT Act, 2000 may be necessary, incorporating provisions that specifically address their use and outline penalties for misuse.
This could entail imposing harsher penalties on individuals creating or disseminating deepfakes for malicious intent and strengthening legal safeguards for individuals whose images or likenesses are used without consent.
Moreover, it’s crucial to recognize that deepfake development and usage is a global concern, likely necessitating international cooperation to regulate their use and prevent privacy violations. Meanwhile, individuals and organizations should remain vigilant about the potential risks associated with deepfakes and verify the authenticity of online information. Governments can take several approaches in the interim:
- Censorship Approach: Issuing orders to intermediaries and publishers to block public access to misinformation.
- Punitive Approach: Holding individuals or organizations accountable for originating or disseminating misinformation.
- Intermediary Regulation Approach: Imposing obligations on online intermediaries to promptly remove misinformation from their platforms, with potential liability as stipulated under Sections 69-A and 79 of the Information Technology Act, 2000.
[1] Can Deepfakes be Leveraged Responsibly? (snrlaw.in)
[2] J. Thies et al., Face2Face: Real-Time Face Capture and Reenactment of RGB Videos (CVPR 2016, Las Vegas, June 2016)
[3] Emerging Technologies and Law: Legal Status of Tackling Crimes Relating to Deepfakes in India. https://www.scconline.com/blog/post/2023/03/17/emerging-technologies-and-law-legal-status-of-tackling-crimes-relating-to-deepfakes-in-india/
[4] Supra (3)
[5] Yipin Zhou and Ser-Nam Lim, Joint Audio-Visual Deepfake Detection, Computer Vision Foundation 2022.
[6] Agarwal, S. et al., Protecting World Leaders Against Deep Fakes, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pp. 38-45, 2019.
[7] High Court grants Centre time to respond to plea on AI, deepfake regulations.
Author: Shrikha Javvaji
