Regulating AI-Generated Speech: Safeguarding Free Expression or Enabling Surveillance?

This introduction provides an overview of the complex interplay between AI-generated content, free speech principles, and the evolving legal landscape in India. It highlights the need for a comprehensive analysis of how existing laws, particularly those related to copyright and online content regulation, apply to AI-created works, and how these laws interact with the fundamental right to freedom of speech and expression guaranteed by the Indian Constitution.

In early 2025, the Indian government introduced a set of proposed regulations seeking to govern AI-generated speech. These draft rules require companies to label AI-generated audio, maintain logs of creation and distribution, and ensure accountability mechanisms for content that could mislead or harm the public. While such measures are presented as safeguards against deepfake scams, political misinformation, and cyber fraud, concerns have also been raised that such regulatory oversight could pave the way for mass surveillance, state censorship, and suppression of dissent, posing a threat to the freedom of speech under Article 19(1)(a) of the Indian Constitution.

This article analyses the constitutionality, necessity, and proportionality of these emerging regulations on AI-generated speech by weighing the right to free expression against the State's duty to maintain public order and protect citizens from harm.

WHAT IS AI?

AI stands for Artificial Intelligence. It refers to the ability of machines and computer systems to perform tasks that typically require human intelligence, such as learning, problem-solving, and decision-making. AI systems can analyze data, understand language, and even generate creative content, often without explicit programming for each specific task.

CONSTITUTIONAL FRAMEWORK: ARTICLE 19(1)(a) AND 19(2)

1. The Right to Freedom of Speech and Expression

The growing integration of AI-driven systems into decision-making processes raises a crucial question: to what extent does the Indian Constitution provide for the "new age rights" required for individual security in this highly technological age? These rights, which include access to information, privacy, and data protection, are essential for protecting citizens from possible violations brought on by AI. Although the Indian Constitution offers a basic framework, this article contends that its current provisions are inadequate to handle the particular difficulties presented by AI, calling for a review of legal frameworks and the creation of specific AI rules. India's fundamental rights rest on the "golden triangle" of Articles 14 (equality), 19(1)(a) (freedom of speech), and 21 (protection of life and personal liberty) of the Constitution. The application of AI, however, raises questions regarding the infringement of these rights. Many AI systems exhibit algorithmic bias, which can erode equality by perpetuating biased outcomes. Freedom of expression may be violated by the use of AI for censorship and content moderation, and the right to privacy may be threatened by the massive volumes of personal data that AI systems gather and process. As Talwar points out, the growing use of AI highlights the "conflict of artificial intelligence with Indian Constitutionalism".

2. Judicial Interpretation: From Shreya Singhal to Kaushal Kishor

In Shreya Singhal v. Union of India, (2015) 5 SCC 1, the Supreme Court struck down Section 66A of the IT Act for being vague and disproportionate, reinforcing the need for clear, narrowly tailored laws that respect Article 19(1)(a). The Court held that only speech that incites imminent harm could be lawfully curtailed.

Further, in Kaushal Kishor v. State of Uttar Pradesh, (2023) 4 SCC 1, the Supreme Court acknowledged that freedom of speech also includes the right to receive information, and that State action cannot disproportionately infringe these freedoms on the basis of merely speculative harms.

Thus, the regulation of AI-generated speech must withstand the test of constitutional scrutiny: it must not only serve a legitimate governmental interest but also avoid unnecessary overreach that chills speech.

THE RISE OF AI-GENERATED SPEECH AND ITS RISKS

The tech community has long debated the threats posed by artificial intelligence. Automation of jobs, the spread of fake news and the rise of AI-powered weaponry have been mentioned as some of the biggest dangers posed by AI.

Questions about who’s developing AI and for what purposes make it all the more essential to understand its potential downsides. Below we take a closer look at the possible dangers of artificial intelligence and explore how to manage its risks.

1. Lack of AI Transparency and Explainability

AI and deep learning models can be difficult to understand, even for those who work directly with the technology. This produces a lack of transparency about how and why AI reaches its conclusions, and leaves unexplained what data AI algorithms use and why they may make biased or unsafe decisions. These concerns have given rise to the field of explainable AI, but there is still a long way to go before transparent AI systems become common practice.

2. Job Losses Due to AI Automation

AI-powered job automation is a pressing concern as the technology is adopted in industries like marketing, manufacturing and healthcare. By 2030, tasks that account for up to 30 percent of hours currently being worked in the U.S. economy could be automated — with Black and Hispanic employees left especially vulnerable to the change — according to McKinsey. Goldman Sachs estimates that 300 million full-time jobs could be lost to AI automation.

“The reason we have a low unemployment rate, which doesn’t actually capture people that aren’t looking for work, is largely that lower-wage service sector jobs have been pretty robustly created by this economy,” futurist Martin Ford told Built In. With AI on the rise, though, “I don’t think that’s going to continue.”

3. Social Manipulation Through AI Algorithms

Social manipulation also stands as a danger of artificial intelligence. This fear has become a reality as politicians rely on platforms to promote their viewpoints, with one example being Ferdinand Marcos, Jr., wielding a TikTok troll army to capture the votes of younger Filipinos during the Philippines’ 2022 election.

TikTok, which is just one example of a social media platform that relies on AI algorithms, fills a user’s feed with content related to previous media they’ve viewed on the platform. Criticism of the app targets this process and the algorithm’s failure to filter out harmful and inaccurate content, raising concerns over TikTok’s ability to protect its users from misleading information.

GOVERNMENT REGULATION IN 2025: AN OVERVIEW

In response to growing concerns, the Indian government in 2025 proposed a new framework under the Information Technology Act, 2000 (Act No. 21 of 2000), introducing provisions specifically targeting synthetic speech and deepfake audio. The key features of the proposed rules are:

1. Mandatory Disclosure and Labelling

All AI-generated audio must include audible disclaimers or metadata tags declaring it was machine-generated. This aims to distinguish artificial speech from real voices, particularly in news, political, and public interest content.
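To make the labelling obligation concrete, the kind of machine-readable disclosure tag contemplated here can be sketched as follows. This is an illustrative assumption only: the draft rules do not prescribe a tag format, and the field names, tool name, and structure below are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_disclosure_tag(audio_bytes: bytes, tool_name: str, tool_version: str) -> str:
    """Hypothetical sketch of a metadata tag declaring audio as machine-generated.

    All field names are illustrative assumptions, not prescribed by the rules.
    """
    tag = {
        "synthetic": True,  # explicit declaration of machine generation
        "generator": f"{tool_name}/{tool_version}",
        "created_utc": datetime.now(timezone.utc).isoformat(),
        # A content hash lets a platform check that the tag matches this exact audio
        "sha256": hashlib.sha256(audio_bytes).hexdigest(),
    }
    return json.dumps(tag)

# Usage with dummy audio bytes and a hypothetical tool name:
tag = build_disclosure_tag(b"\x00\x01fake-pcm-samples", "ExampleTTS", "1.0")
parsed = json.loads(tag)
print(parsed["synthetic"], len(parsed["sha256"]))  # → True 64
```

Binding the tag to a hash of the audio, rather than shipping a free-floating label, is one way a verifier could later confirm that a disclaimer actually belongs to the clip it accompanies.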

2. Logging and Data Retention

Entities using AI for speech synthesis must maintain logs and origin data (e.g., prompts, training models, user inputs) for a minimum of 180 days, similar to intermediary obligations under the IT Rules, 2021 [Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021].
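The 180-day retention duty described above can be sketched as a minimal append-only log with a purge step that only deletes entries once the retention window has lapsed. The class, field names, and model identifiers below are hypothetical illustrations, not anything specified in the draft rules.

```python
import time

RETENTION_SECONDS = 180 * 24 * 60 * 60  # 180-day minimum retention window

class GenerationLog:
    """Minimal sketch of an origin-data log for synthetic speech.

    Structure and field names are assumptions for illustration only.
    """
    def __init__(self):
        self.entries = []

    def record(self, user_id: str, prompt: str, model: str, now: float = None):
        # Each generation event stores the origin data the rules contemplate
        self.entries.append({
            "ts": time.time() if now is None else now,
            "user_id": user_id,
            "prompt": prompt,
            "model": model,
        })

    def purge_expired(self, now: float = None):
        # Entries may be deleted only once they are older than 180 days
        cutoff = (time.time() if now is None else now) - RETENTION_SECONDS
        self.entries = [e for e in self.entries if e["ts"] >= cutoff]

# Usage with explicit timestamps (seconds) for determinism:
log = GenerationLog()
log.record("u1", "read the news bulletin", "tts-model-x", now=0.0)
log.record("u2", "campaign jingle", "tts-model-x", now=200 * 24 * 3600)
log.purge_expired(now=200 * 24 * 3600)  # first entry is now older than 180 days
print(len(log.entries))  # → 1
```

Even a sketch this small surfaces the civil-liberties tension discussed later in this article: the same log that enables accountability is, structurally, a per-user record of prompts and activity.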

3. Content Take-Down and Pre-Screening

Government-authorized agencies may issue content takedown notices under expedited procedure if AI-generated speech is found to threaten public order, national security, or decency/morality. Pre-screening of politically sensitive content may be mandated in high-risk cases.

4. Licensing of AI Tools

Only registered AI tools from approved vendors may be used by media companies, advertisers, and public influencers. Open-source or experimental AI tools are subject to prior permission before deployment.

LEGAL AND ETHICAL CONCERNS: IS THIS A PATH TO SURVEILLANCE?

While well-intentioned, these proposed measures have been met with criticism from civil society, technologists, and constitutional scholars. Key concerns include:

1. Chilling Effect on Free Speech

Mandatory disclaimers and licensing could deter artists, journalists, or activists from using AI tools freely. Overregulation may inhibit innovation, especially for startups and creators operating on limited resources.

2. Pre-Screening – Prior Restraint

The Constitution strongly disfavours prior restraint. In Near v. Minnesota, 283 U.S. 697 (1931), the U.S. Supreme Court held that prior censorship of publication is presumptively unconstitutional, a principle echoed in later First Amendment cases such as Bridges v. California, 314 U.S. 252 (1941). Indian jurisprudence, too, recognizes that speech cannot be curbed before publication except under extreme necessity.

3. Risk of Mass Surveillance

The requirement to store and report user data, including input prompts and generated outputs, may lead to continuous monitoring of online activity. This risks violating the right to privacy under Article 21, as affirmed in Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1. The Supreme Court ruled in that case that the right to privacy is a fundamental right under Article 21 of the Constitution. In the context of AI and data protection, the Court's holding that individuals have the right to control their personal information is vital; the ruling paved the way for future debates on how AI systems should protect the right to privacy.

4. Vague Definitions and Overbreadth

Terms like “sensitive AI speech”, “misleading impersonation”, or “socially disruptive content” are not clearly defined, leaving room for arbitrary application by authorities—contrary to the doctrine of legality upheld in Shreya Singhal.

GLOBAL COMPARISON: HOW OTHER COUNTRIES ARE REGULATING AI SPEECH

India is not alone in its regulatory journey. Several countries have initiated legal frameworks to balance innovation and protection:

  • European Union – AI Act (2024)

The EU AI Act classifies AI systems by risk and imposes transparency obligations on synthetic media such as deepfake impersonations, which must be clearly labelled. It mandates transparency and data governance, but avoids blanket licensing or pre-censorship.

  • United States – Executive Orders and State Laws

U.S. federal policy promotes ethical AI development with voluntary guidelines, while states like California have enacted laws prohibiting AI impersonation in election contexts, especially 60 days before voting.

  • China – Strict State Control

China requires pre-approval of all synthetic media and mandates state-standard watermarking, heavily restricting dissenting or politically sensitive AI content. Critics argue this facilitates censorship and curbs civil liberties.

India must avoid drifting toward models that suppress democratic expression while learning from the best practices of transparency, accountability, and minimal restriction.

9 K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1.

RECOMMENDATIONS AND THE WAY FORWARD

To strike the right balance between public safety and constitutional freedoms, the following measures are proposed:

  • Transparent Definitions

Clearly define “harmful AI-generated speech” and limit the scope of regulation to intentional deception, fraud, or hate speech, not artistic or experimental usage.

  •  Judicial Oversight

All takedown and data access orders should be subject to judicial or quasi-judicial review to prevent misuse and ensure adherence to due process.

  •  Sunset Clauses and Periodic Review

All AI speech regulations should include sunset provisions, with review committees evaluating the impact on civil liberties every 2–3 years.

  • Encourage Ethical AI Use Through Incentives

Offer certifications, tax rebates, or public recognition for companies adopting ethical AI practices instead of resorting to mandatory control.

  • Public Participation and Transparency

Regulatory frameworks must be drafted in consultation with stakeholders—including civil society, technologists, media, and academia—to ensure legitimacy and consensus.

THE EMERGENCE OF ARTIFICIAL INTELLIGENCE IN INDIA: BALANCING OPPORTUNITIES AND CHALLENGES

The development of artificial intelligence (AI) in India presents a complicated environment with many exciting prospects as well as difficult obstacles. AI technologies are changing the labour market and the economy as a whole as they are incorporated into more and more industries. On the positive side, AI is predicted to boost economic growth by increasing productivity and generating job roles that did not previously exist. For example, AI solutions like chatbots and predictive analytics are revolutionizing industries like healthcare, finance, and customer service by increasing productivity and service quality. According to a number of studies, including those published by the World Economic Forum and NASSCOM, AI may create millions of new jobs in India by 2025, particularly in the IT and data management sectors. At the same time, the absence of a trained workforce that can adjust to these developments makes the problem more difficult, and could widen the divide between those who can prosper in an AI-driven economy and those who cannot. India must therefore create a thorough legislative framework that not only tackles these issues but also encourages ethical AI application while defending individual liberties. By promoting cooperation between government agencies, business leaders, and academic institutions, India can take advantage of AI while ensuring its workforce is ready.

“NEW RIGHTS, NEW RISKS: NAVIGATING PRIVACY, DATA, AND THE DIGITAL AGE”

The idea of rights has broadened in the digital era to include "new age rights" that are essential to personal freedom and welfare. The growing reliance on information and technology is reflected in these rights, particularly those related to privacy, data protection, and digital rights. Assessing how successfully the Indian Constitution protects citizens in a society driven by artificial intelligence requires an understanding of the difficulties these rights face. AI systems pose a growing threat to the right to privacy, which includes a person's sovereignty over their personal data and freedom from unauthorized access. There are serious worries about the enormous volumes of personal data that AI collects, processes, and analyses. Building confidence in digital services is crucial, but overly lax data-gathering methods can undermine personal freedom and open the door to abuse, so maintaining this balance is essential. To prevent data breaches, information abuse, and unauthorized access, data protection entails putting strong safeguards in place for the gathering, storing, using, and sharing of personal data. Data protection rules can increase consumer trust; however, their complexity may bring drawbacks. The Digital Personal Data Protection Act, 2023 (DPDP Act, 2023) is being implemented in India with the goal of requiring accurate data for automated decision-making. However, concerns have been raised over its ability to handle all potential AI-related harms and over its thorough enforcement. The proprietary nature of algorithms and the requirement for transparency in AI decision-making can occasionally clash, posing a problem that has to be resolved. There are many barriers to digital rights, which include the freedom to access, utilize, create, and engage with digital technologies.
Freedom of expression is seriously threatened by the proliferation of hate speech and false information on the internet, necessitating rigorous evaluation of how to strike a balance between this right and the necessity of stopping harmful content. Concerns regarding market dominance and the possibility of manipulating consumer behaviour are also raised by the growing concentration of data and AI models in a small number of companies. To guarantee that digital rights are adequately safeguarded, these issues must be resolved. Even though new age rights have been acknowledged, many obstacles remain in the Indian setting. It is challenging to properly address AI-related concerns when no specific regulations are in place, which breeds uncertainty. Digital rights, privacy, and data protection are seriously threatened by algorithmic bias, data breaches, and opaque AI decision-making. To overcome these issues, India must create comprehensive legal frameworks that give the defence of fundamental rights first priority. This entails strengthening data protection regulations, encouraging algorithmic transparency, and putting in place explicit accountability frameworks for AI systems.

CONCLUSION

Both enormous opportunities and difficult obstacles arise from the incorporation of artificial intelligence (AI) into Indian society, especially when it comes to safeguarding the Constitution's fundamental rights. It is crucial to critically evaluate how AI developments fit with the values of equality, privacy, and free speech as they develop. Despite having strong fundamental rights, the current legal system needs to be modified to handle the particular consequences of artificial intelligence. To avoid possible violations of individual rights, important concerns including algorithmic bias, a lack of transparency, and accountability must be given top priority. The absence of precise regulations increases uncertainty, making it difficult to hold parties accountable when AI systems do harm or perpetuate discrimination. Furthermore, in order to defend citizens' rights in a digital world, it is critical to establish clear criteria for data protection and the ethical use of AI. India must create a thorough regulatory framework that not only tackles present issues but also accounts for upcoming advancements in AI technology as it traverses this challenging landscape. Through the promotion of cooperation among interested parties, such as legislators, technology developers, and civil society organizations, India can guarantee the preservation of its constitutional principles. The ultimate objective should be to use AI as a tool for empowerment while preserving the liberties and rights that are essential to a democracy. This balanced approach will be essential to creating a future where technology strengthens rather than diminishes individual liberty.


Author: Anam

