AI and Data-Driven Breaches: Legal Challenges and Liability Frameworks

I. INTRODUCTION

Artificial Intelligence (AI) is revolutionizing our world at an unprecedented pace, reshaping daily life, work, and society as a whole. The emergence of OpenAI has accelerated advancements in AI, impacting industries, governments, and various sectors. While AI once operated beneath the radar of public awareness, the rise of generative AI systems powered by Large Language Models (LLMs) has pushed it into the mainstream, with machines now matching or outperforming humans in numerous tasks. These systems are now widely adopted to enhance efficiency and productivity.

However, the rapid adoption of AI has also introduced significant legal uncertainties, particularly in the realm of information privacy. AI challenges traditional legal frameworks by redefining concepts like “personal information.”

As AI continues to evolve, its widespread use raises a host of legal, ethical, and societal concerns. These include issues related to privacy, job displacement due to automation, the rise of AI-driven cyberattacks, and even existential risks. Without robust regulations and clear legal accountability for data controllers and corporations, AI could be misused in ways that pose significant threats to humanity. Recognizing these risks, intergovernmental organizations, national governments, and regional bodies have begun taking steps toward AI governance. 

For instance, the European Union has implemented the General Data Protection Regulation (GDPR), while the United States relies on sector-specific laws, such as those enforced by the Federal Trade Commission (FTC), to monitor AI for consumer protection and unfair trade practices. Countries like China and Japan have adopted a balanced approach, promoting innovation while maintaining strict oversight. Similarly, the UK and India have embraced flexible regulatory frameworks to govern AI. 

Despite these efforts, significant gaps persist in existing legislative frameworks, leading to regulatory loopholes, weak enforcement, and unresolved ethical dilemmas. This article explores the concept of AI as a “legal entity,” examining the legal accountability of data controllers and their governance under global cybersecurity laws.

By proposing reforms and drawing insights from global frameworks like the GDPR and the EU AI Act, this article advocates for adaptive, future-proof liability models that strike a balance between innovation and accountability. It aims to provide actionable recommendations for policymakers, legal experts, and technologists in India and beyond. 

II. CAN AI BE CONSIDERED A LEGAL ENTITY?

AI has undergone a dramatic transformation since its conceptualization in the mid-20th century. Alan Turing’s seminal 1950 paper Computing Machinery and Intelligence posed the question, “Can machines think?”, and introduced the Turing Test, which measures a machine’s capability to imitate human conversation convincingly[1]. Early AI models were mostly simple rule-based programs, such as checkers-playing software, and lacked true learning ability. The rise of machine learning in the 1990s culminated in IBM’s Deep Blue[2] defeating the reigning world chess champion in 1997, and subsequent progress was fuelled by big data and cloud computing. The 21st century has seen the development of generative AI such as GPT-4, LaMDA, and Sydney, leading to AI systems with advanced problem-solving, reasoning, and even emergent capabilities beyond their explicit programming. As AI’s decision-making becomes more autonomous, legal and ethical questions arise. The “black box” problem makes AI’s reasoning opaque, complicating accountability for its actions; it therefore falls to courts and policymakers to determine the legal accountability of AI[3].

Ascribing legal status to AI has long been a subject of debate, as it is a complex issue touching upon legal, ethical, and technological dimensions.

Legal entities generally fall into two categories: natural persons (human beings with inherent rights and responsibilities) and legal persons (non-human entities, such as corporations, that are granted legal rights and obligations by law).

The “legal personhood” debate on AI revolves around whether AI can be granted a status similar to that of corporations or companies. A company enjoys a separate legal personality: it possesses legal rights such as entering into contracts, owning property, and suing or being sued in its own name. Unlike corporations, which have human owners and managers responsible for decisions, AI functions through algorithms and self-learning mechanisms, making it difficult to assign legal liability for its actions. If AI were granted legal personhood, developers, manufacturers, and users could evade responsibility for harm caused by AI. Several challenges also stand in the way: when AI systems operate through complex, cross-border arrangements involving multiple stakeholders (developers, deployers, users), it becomes difficult to pinpoint liability for AI-related harms[4]. Generative AI built on LLMs has self-learning capabilities and can generate unpredictable outcomes, making it challenging to foresee and assign responsibility for any damage it causes. EU rules on the regulation of AI impose strict liability on developers. US tort law would also hold developers liable, but negligence must be proven, meaning AI is treated as a product rather than as a person or agent[5].

Granting AI legal personhood remains a theoretical debate for now, but as AI systems evolve, legal frameworks will need to adapt to balance innovation, rights, and responsibilities.

III. ACCOUNTABILITY UNDER EXISTING CYBERSECURITY LAWS

Worldwide, significant attention is being devoted to the regulation of AI, as it presents both opportunities and risks in cybersecurity. The United States has adopted a sector-specific approach, which emphasizes regulations tailored to particular industries. The EU has implemented a more unified, ethics-driven framework, exemplified by the General Data Protection Regulation (GDPR). The USA favours a laissez-faire approach, whereas China implements a government-led strategy that balances regulatory oversight with market-driven innovation. Countries like India, Bangladesh, and Pakistan are still in the initial stages of formulating a comprehensive AI strategy, though their legal landscapes are evolving rapidly.

(a) European Union (EU)

The EU has made significant progress toward a uniform regulatory framework for AI across the Union. One of the first attempts was the European Commission’s White Paper on AI (2020), which outlined the EU’s vision for a trustworthy and human-centric AI framework.

The EU’s General Data Protection Regulation (GDPR), which came into effect in 2018, imposes strict requirements on the collection, use, and storage of personal data, including by AI technologies that handle such data. It is the backbone of the EU’s approach to data privacy in the digital sphere. Under the GDPR, individuals have a “right to an explanation”, allowing them to request clarity on how automated decisions that affect them are made[6].
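
For illustration only, the sketch below shows one way a deployer of a simple, linear scoring model might surface per-feature contributions as a plain-language explanation of an automated decision. The features, weights, and threshold are hypothetical; they are not drawn from any real system, and the GDPR does not prescribe any particular explanation format.

```python
# Illustrative sketch: exposing per-feature contributions of a simple linear
# scoring model as a human-readable explanation. The features, weights, and
# threshold are hypothetical, not drawn from any real system.

FEATURE_WEIGHTS = {            # hypothetical model coefficients
    "income_stability": 2.0,
    "existing_debt": -1.5,
    "payment_history": 1.2,
}
THRESHOLD = 1.0                # score above which the application is approved

def explain_decision(applicant: dict) -> str:
    contributions = {
        name: FEATURE_WEIGHTS[name] * applicant[name] for name in FEATURE_WEIGHTS
    }
    score = sum(contributions.values())
    decision = "approved" if score >= THRESHOLD else "declined"
    # Rank the factors that drove the decision, largest absolute impact first.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"Decision: {decision} (score {score:.2f}, threshold {THRESHOLD})"]
    lines += [f"  {name}: contribution {value:+.2f}" for name, value in ranked]
    return "\n".join(lines)

print(explain_decision({"income_stability": 0.8, "existing_debt": 0.5, "payment_history": 0.6}))
```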

The GDPR does not explicitly mention AI, but many of its provisions apply to AI-related data processing. There is a tension between traditional data protection principles (such as purpose limitation, data minimisation, and restrictions on automated decisions) and AI’s ability to collect and analyse vast amounts of personal data for evolving purposes. However, these principles can be interpreted flexibly, in ways that support the responsible use of AI and big data. Because AI systems use personal data to make inferences, a study by the European Parliamentary Research Service[7] acknowledged that AI transforms personal data into a valuable commodity, enabling automated decision-making that may be cheaper, more accurate, and more impartial than human decisions. However, algorithmic decisions can also be mistaken or discriminatory, and can lead to pervasive surveillance, persistent evaluation, and potential manipulation, raising concerns about the rise of ‘surveillance capitalism’.

Key principles of the GDPR[8] that significantly impact AI data processing:

  • Fairness & Transparency (Article 5(1)(a)): AI systems must ensure transparency despite their complexity, providing clear information on processing and profiling. Fairness requires preventing bias and discrimination in AI decision-making.
  • Purpose Limitation (Article 5(1)(b)): AI must not repurpose personal data beyond the originally specified use unless compatible.
  • Data Minimisation (Article 5(1)(c)): AI’s reliance on large datasets conflicts with this principle, but techniques like pseudonymisation help mitigate risks (see the sketch after this list).
  • Accuracy (Article 5(1)(d)): AI decisions depend on accurate input data; errors can harm data subjects.
  • Storage Limitation (Article 5(1)(e)): AI systems must limit data retention unless for research or archiving with safeguards.
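
By way of illustration, the following sketch shows one common pseudonymisation technique, keyed hashing of a direct identifier, which can help reconcile large training datasets with the data minimisation principle. The field names and key handling are hypothetical; a real deployment would require far more careful key management and re-identification risk analysis.

```python
# Illustrative sketch: pseudonymising a direct identifier with a keyed hash
# (HMAC) before the record is used for analytics or model training.
# Field names and key handling are hypothetical; this alone is not a complete
# compliance measure.

import hashlib
import hmac

SECRET_KEY = b"replace-with-a-securely-stored-key"   # hypothetical key

def pseudonymise(identifier: str) -> str:
    """Return a stable pseudonym so records can be linked without exposing the identifier."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "user@example.com", "purchase_total": 42.50}
safe_record = {
    "user_pseudonym": pseudonymise(record["email"]),   # direct identifier replaced
    "purchase_total": record["purchase_total"],
}
print(safe_record)
```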

Other key GDPR aspects relevant to AI:

  • Consent (Article 6(1)(a)): Obtaining valid consent for AI processing is challenging.
  • Legal Bases (Article 6): AI processing must have a lawful basis, such as legitimate interest.
  • Special Category Data (Article 9): AI’s ability to infer sensitive data raises concerns.
  • Data Subjects’ Rights (Articles 12-23): AI must allow individuals to access, erase, and object to automated decision-making.
  • Privacy by Design (Article 25): AI systems should integrate data protection from inception.
  • DPIA (Articles 35-36): AI applications with high-risk processing require impact assessments.

(b) The United States

The United States follows a sectoral approach to AI governance, which means that AI regulations are developed and enforced within specific industries, rather than through a single, overarching AI law.

This approach allows different federal agencies to regulate AI within their domains. However, it lacks a comprehensive legal framework like the EU’s. Governance of AI falls under the ambit of several federal agencies, such as the Federal Trade Commission (FTC), which monitors businesses’ use of AI to ensure it is lawful. Similarly, the FDA oversees the deployment of AI in healthcare, most notably the approval of machine-learning algorithms used in medical devices[9].

President Biden’s Executive Order 14110 highlights the need for secure AI development, focusing on data protection and responsible use. The Federal Trade Commission (FTC) has also issued warnings about AI’s potential for privacy violations, particularly in biometric data handling and consumer deception. In California, Governor Newsom’s executive order mandates guidelines for AI procurement and risk mitigation. The California Privacy Protection Agency (CPPA) has proposed strict regulations requiring businesses to conduct risk assessments and ensure compliance with the California Consumer Privacy Act (CCPA)[10]. Developers and deployers of GenAI must consider legal obligations such as obtaining consent for data collection, limiting data retention, and ensuring transparency in AI-generated content. Companies must also implement robust security measures to prevent unauthorized data use and comply with consumer rights regarding personal data. Privacy compliance in AI is an ongoing challenge, and organizations must stay ahead of regulatory developments to avoid legal risks.

(c) China

China’s approach to AI legislation uniquely blends strict government oversight with market-driven experimentation.

The ‘New Generation Artificial Intelligence Development Plan’ (NGAIDP), introduced in 2017, serves as the foundation for AI governance, aiming to position China as a global AI leader by 2030.

Unlike the EU’s human rights-centered framework or the US’s sectoral approach, China integrates AI regulation with its national strategic goals. The Cyberspace Administration of China (CAC) oversees AI regulations, covering technical, ethical, and safety concerns. While China lacks a unified national AI law, certain municipalities have enacted their own guidelines. The 2019 Beijing AI Principles, though non-binding, emphasize ethical aspects like transparency and fairness.

China’s AI governance also intersects with data protection laws, such as the Personal Information Protection Law (PIPL) and the Data Security Law (DSL). However, critics argue that these laws primarily facilitate state surveillance rather than protect individual privacy. AI-driven technologies, including facial recognition and the social credit system, further enhance government oversight, raising concerns about personal freedoms.

Ethical AI discussions in China remain limited compared to the West. However, the growing emphasis on principles like those in the Beijing AI guidelines reflects a rising focus on responsible AI use. Rooted in Confucian values, China’s ethical stance prioritizes social harmony and collective well-being over individual privacy, influencing both public and governmental perspectives on AI governance [11].

(d) India

India is only beginning to draft a comprehensive legal framework for AI. The current legal framework focuses mostly on the fintech (financial technology) sector, which is increasingly driven by the integration of AI. This has brought significant capabilities in personalisation, fraud detection, and operational efficiency, transforming the financial sector. However, the sector’s growing dependence on vast amounts of personal data has raised issues around data security, consumer autonomy, and the risks of misuse.

India’s principal response to data privacy concerns in the fintech sector is the Digital Personal Data Protection Act (DPDPA), enacted in 2023[12]. This legislation aims to balance the protection of personal data with the promotion of technological innovation.

At its core, the Act mandates that fintech firms obtain explicit user consent before processing personal data, enhancing transparency and strengthening individual control over personal information. By aligning with global standards such as the European Union’s General Data Protection Regulation (GDPR), the DPDPA establishes itself as a comprehensive framework for data privacy governance.

The Act grants individuals key rights, including access to, correction of, and deletion of their data, empowering them with greater control over their personal information. For fintech firms, it imposes strict security obligations to safeguard data integrity and protection. Additionally, the introduction of grievance redress mechanisms provides a structured process for resolving data misuse disputes, fostering consumer trust and enabling an ecosystem where innovation and compliance coexist.
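
As a rough sketch of how such obligations might translate into engineering practice, the snippet below records consent alongside purpose and retention metadata and exposes simple handlers for access and erasure requests. The structure and field names are a hypothetical illustration, not anything prescribed by the DPDPA.

```python
# Illustrative sketch: a minimal consent record with handlers for access and
# erasure requests. Fields and workflow are hypothetical, not prescribed by the DPDPA.

from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str                 # the specific purpose the user consented to
    granted_at: datetime
    retention: timedelta         # how long the data may be kept
    data: dict = field(default_factory=dict)

    def is_valid(self, now: datetime) -> bool:
        return now <= self.granted_at + self.retention

store: dict[str, ConsentRecord] = {}

def handle_access_request(user_id: str) -> dict:
    """Right of access: return a copy of everything held about the user."""
    rec = store[user_id]
    return {"purpose": rec.purpose, "data": dict(rec.data)}

def handle_erasure_request(user_id: str) -> None:
    """Right to erasure: delete the user's record entirely."""
    store.pop(user_id, None)

store["u1"] = ConsentRecord("u1", "fraud detection", datetime.now(),
                            timedelta(days=365), {"phone": "+91-XXXXXXXXXX"})
print(handle_access_request("u1"))
handle_erasure_request("u1")
print("u1" in store)   # False
```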

The DPDPA enforces compliance through stringent penalties, with fines reaching up to INR 2.5 billion (approximately US$30 million) for violations [13].

The IT Act, 2000, supplemented by the IT (Reasonable Security Practices and Procedures) Rules, 2011, imposes obligations on organizations handling sensitive personal data. Section 43A requires entities to implement reasonable security measures, and Section 72A criminalizes unauthorized data disclosure[14]. However, these provisions were drafted before the rise of AI-driven automation and cyber threats, making them inadequate for addressing AI-specific breaches. While companies using AI-based decision-making must comply with security standards, there are no explicit regulations ensuring algorithmic fairness, bias mitigation, or accountability for AI-generated privacy violations. Policy initiatives like NITI Aayog’s National Strategy on Artificial Intelligence (2018)[15] and the Draft National Data Governance Framework Policy (2022) promote AI regulation but lack legal enforceability.

The Justice K.S. Puttaswamy case (2017)[16] established privacy as a fundamental right, influencing data protection laws, though AI liability remains unclear.

Unlike the EU AI Act, India has no dedicated AI liability framework or strict cross-border data protection rules. To address these gaps, India must introduce AI-specific liability provisions, mandate algorithmic transparency, and strengthen cybersecurity laws to keep pace with global AI regulations.

IV. AI-DRIVEN DATA BREACH CASE STUDIES

(a) OpenAI ChatGPT Data Exposure (2023)

In March 2023, OpenAI confirmed a security flaw in ChatGPT that exposed user chat histories and payment details. The issue stemmed from a bug in an open-source Redis client library used by the service, which allowed some users to view the titles of other users’ chat histories and, in a smaller number of cases, portions of other users’ payment-related information, including partial credit card details. This raised serious concerns about the security of AI-driven chat models and the risks associated with storing large volumes of conversational data. The Italian Data Protection Authority (DPA) temporarily banned ChatGPT, citing violations of the General Data Protection Regulation (GDPR), particularly regarding transparency and lawful data processing. Following regulatory pressure, OpenAI implemented stronger security measures and updated its privacy policies to comply with European standards[17].
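
To illustrate the general class of failure involved, the sketch below models, in deliberately simplified form, how a request cancelled on a shared, pooled connection can leave a response behind that the next user then reads. This is a hypothetical illustration only, not OpenAI’s code and not the behaviour of any specific Redis client API.

```python
# Illustrative sketch only: a simplified model of how a shared-connection bug
# can hand one user's response to another. Class names and flow are hypothetical.

from collections import deque

class SharedConnection:
    """A single backend connection reused across users (e.g. from a pool)."""
    def __init__(self):
        self._pending = deque()       # responses waiting to be read "on the wire"

    def send_query(self, user, query):
        # The backend eventually answers every query it receives.
        self._pending.append(f"history for {user}: {query}")

    def read_response(self):
        # Reads whatever response arrives next, regardless of who asked.
        return self._pending.popleft()

conn = SharedConnection()

# User A sends a request but the client cancels / times out *before* reading
# the reply, leaving the response sitting on the shared connection.
conn.send_query("alice", "list my chat titles")
# (cancelled: read_response() is never called for alice)

# User B is handed the same connection from the pool and issues a new request.
conn.send_query("bob", "list my chat titles")

# Bob's client reads the *next* response on the wire, which is Alice's.
print(conn.read_response())   # -> "history for alice: list my chat titles"
```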

(b) Cambridge Analytica & Facebook Data Harvesting (2018)

The Cambridge Analytica scandal (2018) remains one of the most infamous cases of AI-driven data misuse.

The political consulting firm harvested personal data from 87 million Facebook users without their explicit consent, leveraging AI-driven psychographic profiling to manipulate voter behaviour during the 2016 U.S. presidential election and the UK Brexit referendum.

AI algorithms analysed users’ online behaviours to predict and influence political opinions, demonstrating how AI-powered data analytics could be weaponized for electoral manipulation. As a result, Facebook was fined $5 billion by the U.S. Federal Trade Commission (FTC) in one of the largest penalties ever imposed for privacy violations. The case spurred global regulatory changes, including stricter enforcement of the GDPR and the California Consumer Privacy Act (CCPA)[18].

(c) Clearview AI Facial Recognition Data Breach (2020–Present)

A particularly alarming AI-related data breach involved Clearview AI, a facial recognition company that built a massive database by scraping billions of images from social media to train its AI-powered facial recognition surveillance tool. The company marketed its technology to law enforcement agencies and private entities, raising serious ethical and legal concerns. However, in 2020, a major data breach exposed internal customer lists, facial recognition models, and law enforcement usage records, revealing how AI-powered surveillance systems could be misused. The Italian Data Protection Authority fined Clearview AI €20 million under GDPR, and additional lawsuits were filed in the U.S. under the Illinois Biometric Information Privacy Act (BIPA). This case underscored the urgent need for biometric data protection laws and stricter regulatory oversight of AI-driven surveillance technologies [19].

These case studies demonstrate how AI-driven data breaches pose significant legal, ethical, and security risks, and they highlight gaps in AI security frameworks and the need for stronger global AI regulation. As AI systems continue to evolve, governments and regulators worldwide must develop adaptive legal frameworks to hold companies accountable and protect user privacy.

V. CONCLUSION

AI-driven data breaches present significant challenges that existing legal frameworks struggle to address. While regulations like the GDPR in the EU and sector-specific laws in the US provide a foundation for AI governance, gaps remain in enforcement, cross-border accountability, and the rapid evolution of AI capabilities. Many legal systems, including India’s, lack specific provisions for AI liability, algorithmic transparency, and ethical data processing, making it difficult to hold developers and corporations accountable for AI-related harm. Furthermore, the absence of standardized global regulations creates loopholes that corporations can exploit, leading to inconsistent enforcement and weak consumer protection.

To build a more effective legal framework, governments must adopt adaptive, AI-specific laws that evolve alongside technological advancements. This includes mandating algorithmic transparency, requiring AI impact assessments before deployment, and implementing strict liability models for AI developers and deployers. Strengthening cross-border cooperation is also crucial to address jurisdictional challenges, ensuring companies cannot evade responsibility by operating in regulatory havens. Additionally, interdisciplinary collaboration between policymakers, technologists, and legal experts is essential to create balanced regulations that foster innovation while protecting fundamental rights. By bridging these gaps, nations can establish robust AI governance frameworks that enhance cybersecurity, safeguard privacy, and ensure accountability in an increasingly AI-driven world.


[1] Alan Turing, Computing Machinery and Intelligence, Mind, Vol. 59, No. 236 (1950), pp. 433-460

[2] IBM, Deep Blue: Overview, IBM Research, available at https://www.ibm.com.

[3] Forrest, K. B. (2024). The Ethics and Challenges of Legal Personhood for AI. The Yale Law Journal Forum, 133, 1175–1190.

[4] Anulekha Nandi, Artificial intelligence and personhood: Interlay of agency and liability, OBSERVER RESEARCH FOUNDATION (Dec. 7, 2023), https://www.orfonline.org/expert-speak/artificial-intelligence-and-personhood-interplay-of-agency-and-liability.

[5] Bronwyn Howell, Regulating Artificial Intelligence in a World of Uncertainty (Am. Enter. Inst. 2024), https://www.jstor.org/stable/resrep64560.

[6] Miazi, Muhammad Abu Nayem. “Interplay of Legal Frameworks and Artificial Intelligence (AI): A Global Perspective.” Law and Policy Review, vol. 2, no. 2, 2023, pp. 1–25, https://doi.org/10.32350/lpr.22.01.

[7] EPRS European Parliamentary Research Service, The Impact of the General Data Protection Regulation (GDPR) on Artificial Intelligence, PE 641.530 (2020).

[8] Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data (General Data Protection Regulation), 2016 O.J. (L 119) 1 [hereinafter GDPR].

[9] West, D. M. (2023). The EU and U.S. diverge on AI regulation: A transatlantic comparison and steps to alignment. Brookings Institution. https://www.brookings.edu/articles/the-eu-and-us-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/

[10] Tam, Jonathan. Privacy Law Issues Associated with Developing and Deploying Generative AI Tools. California Lawyers Association, Privacy Law Section Journal, Vol. 1, 2024.

[11] Miazi, Muhammad Abu Nayem. “Interplay of Legal Frameworks and Artificial Intelligence (AI): A Global Perspective.” Law and Policy Review, vol. 2, no. 2, 2023, pp. 1–25, https://doi.org/10.32350/lpr.22.01.

[12] Digital Personal Data Protection Act, No. 32, Acts of Parliament, 2023 (India), available at https://www.meity.gov.in/.

[13] Sauradeep Bag, Digital Personal Data Protection Act: Shaping India’s AI-driven fintech sector, OBSERVER RESEARCH FOUNDATION (Dec. 23, 2024), https://www.orfonline.org/.

[14] The Information Technology Act, No. 21, Acts of Parliament, 2000 (India), §§ 43A, 72A, available at https://www.indiacode.nic.in/.

[15] NITI Aayog, National Strategy on Artificial Intelligence (2018), available at https://www.niti.gov.in/.

[16] Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1 (India).

[17] Bitdefender (2024), Italian Data Protection Authority Fines OpenAI €15 Million for GDPR Violation, https://www.bitdefender.com.

[18] Carole Cadwalladr & Emma Graham-Harrison, Revealed: 50 Million Facebook Profiles Harvested for Cambridge Analytica in Major Data Breach, THE GUARDIAN (Mar. 17, 2018, 10:03 PM GMT), https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election.

[19] Observer Research Found. Clearview AI’s Biometric Data Breach and Global Implications, (2023), https://www.orfonline.org.


Author: Utsa Bandyopadhyay

