Legal Accountability for Artificial Intelligence Driven Intellectual Property Infringements

Artificial Intelligence is not a far-off concept anymore; it is here, changing how we live, work, and create. From hospitals and classrooms to film sets and courtrooms, it is reshaping familiar spaces. One of the places where its presence is especially complex is the world of Intellectual Property. In this space, AI is both a brilliant tool and a legal puzzle.

AI’s ability to produce original-looking content, be it music, writing, or even visual art, has pushed us to rethink the very basics of creativity. For decades, IP law has revolved around one assumption: there’s a human at the center of every work. Someone with a mind, emotions, intent. But machines don’t tick those boxes. So when AI generates something that looks a lot like protected work, we are left with more questions than answers. Who do we hold responsible? The machine, the people who trained it, or its users? Is it fair, or even possible, to pin liability on a developer who never saw the final result? Modern AI tools like GPT-4, DALL·E, or Midjourney don’t just automate tasks. They create. Often convincingly so. They mimic voices, art styles, and even brands with such precision that the line between inspiration and infringement starts to blur.

Different countries are approaching these issues in very different ways. Some are rewriting laws. Others are stretching old doctrines to fit new realities. Courts are improvising. And creators are caught in the middle. This essay explores how India, along with a few other jurisdictions, tries to make sense of this legal mess. The aim is to understand what is working, what isn’t, and where we might be heading next.

UNDERSTANDING AI AND IP

AI refers to systems designed to imitate how humans think. It picks up patterns, reasons through problems, and sometimes even learns from mistakes. That in itself is impressive. But what’s really surprising is how far it has come in the creative space. AI isn’t just assisting people with their work anymore. It is now making things on its own. Music tracks, short stories, digital art, architectural sketches, you name it. Almost nothing seems out of reach. That brings us face-to-face with a rather uncomfortable dilemma: how do our laws treat works that weren’t created by a person?

The answer? Not too well. The systems we rely on for IP protection were built on a very human assumption: that creativity comes from people. That belief runs deep, quietly shaping every definition and legal framework. So when a machine shows up and starts creating, the law doesn’t really know where to put it. It’s a bit like trying to fit a square peg in a round hole.

Legal Foundations of IP in India

The Copyright Act, 1957 defines an “author” in Section 2(d) as a natural person[1], someone living, breathing, and likely holding a paintbrush or pen. No mention of machines. The Patents Act, 1970, follows the same line. Under Section 2(y), an “inventor” must be a person.[2] There’s no room yet to write in an AI as the owner of a breakthrough idea. Similarly, under the Trade Marks Act, 1999, Section 2(1)(zb) talks about goodwill, identity, and consumer association[3], things we typically attach to human names or companies, not algorithms.

So, if a chatbot pens a poem or an AI like Jukebox strings together a melody, who owns it? No clear answers. And maybe worse, no legal provisions that even try to answer.

Categorizing AI’s Interaction with IP

AI wears many hats in the creative world. It shifts roles.

First, there is AI as a tool. Think of it as an assistant. Something like Adobe’s Sensei that nudges a designer with color suggestions or layout help. The creator here is still human, plain and simple. Then comes AI as a collaborator. This is murkier. Both the human and the machine bring something real to the table. So, who gets the legal credit? That’s where things begin to feel uncertain.

Finally, we have AI as an autonomous creator. This is where it gets thorny. The AI creates with little or no human help: it writes a novel, composes a track, maybe even paints a digital portrait. In these cases, current laws fall short. There is no clean way to say who owns what, or who should be blamed if something goes wrong.[4]

Global Academic and Policy Perspectives

Across the globe, legal scholars and policymakers are grappling with the same questions. The World Intellectual Property Organization (WIPO), in its 2020 report[5], flagged this issue clearly. Some believe AI-generated work deserves protection, while others are still on the fence. And even where there is agreement, there’s little clarity on who gets the rights.

The UK has tried to address it, albeit cautiously. Its Copyright, Designs and Patents Act 1988 includes a clause for “computer-generated works.”[6] But there’s a catch: a human must still direct the creative process. India, meanwhile, is paying attention. The 2018 Draft National Strategy on AI recognizes that AI has a place in the creative sector[7]. That said, the document stops short of offering any serious legal reforms. It’s more a nod than a nudge.

Judicial Recognition in India

So far, Indian courts have not tackled AI-generated content head-on. But some past judgments help sketch the legal mood. In EBC v. D.B. Modak[8], the Supreme Court emphasized that a work must show a “modicum of creativity”, and that creativity, crucially, has to come from a human mind. More recently, in Navigators Logistics Ltd. v. Kashif Qureshi[9], the Delhi High Court echoed that view. The originality in any protected work must stem from human effort and thought.

The courts are still focused on people. AI remains a legal outsider.

CASES IN POINT

Getty Images v. Stability AI (UK, 2023)

Getty Images sued Stability AI, the creators of Stable Diffusion, for using millions of images to train their model without asking for permission. Getty argued that the AI didn’t just learn from the images. It replicated their style, potentially hurting the rights of photographers and visual artists. The core dispute is over unauthorized copying and whether the AI’s outputs, which resemble existing works, should be treated as infringements.[10]

Thaler v. Perlmutter (U.S. Copyright Office, 2022)

Here, Stephen Thaler applied for copyright protection on behalf of his AI, “Creativity Machine,” which had independently created a piece of art. The U.S. Copyright Office denied the application, stating that only a human being can be an author.[11]

Together, these cases highlight the cracks forming in our legal systems.

LEGAL ACCOUNTABILITY FRAMEWORKS

Indian Legal Framework: Present and Absent

As we have seen, India’s existing IP laws were drafted for a world where only humans or companies could own, create, or infringe rights. They don’t quite know what to do with non-human creators.

Take the Copyright Act, 1957. Section 51[12] describes infringement in terms of a “person” doing something, clearly implying a human or juristic entity. AI, by that definition, simply doesn’t count.

Under Section 6(1) of the Patents Act, 1970[13], only natural persons can be named as inventors. That shuts the door on patent protection for AI-generated inventions, and also on assigning liability to the AI itself for any infringement.

The IT Act, 2000, though not an IP law per se, offers some interesting overlap. Section 79[14] gives a safe harbour to intermediaries, but it is unclear whether developers of generative AI tools, who don’t host content but enable its creation, get the same benefit. In MySpace Inc. v. Super Cassettes Industries Ltd., the Delhi High Court did suggest that failing to act on takedown notices could attract liability, even without direct infringement.[15] That could someday be relevant to AI developers, too.

Regional Glimpses: South Asian Legal Preparedness

In Pakistan, the Copyright Ordinance of 1962[16] continues to govern IP rights, but it relies heavily on human agency. There is no legal provision for computer-generated works, and no judicial precedent yet to clarify liability when AI infringes. While the Pakistan National AI Policy 2023[17] identifies AI ethics as a priority, it falls short of addressing ownership or infringement concerns. In Bangladesh, the Copyright Act, 2000[18], and the Patents and Designs Act, 1911[19], remain silent on non-human authorship. Legal commentary has begun to emerge, but there are no formal discussions around AI accountability in IP.

These examples point to a regional challenge. Globally, legal systems are beginning to respond to the rise of AI. They differ in how they frame liability, but there’s one shared understanding: someone, a person or an entity, must be held accountable.

United States

In the U.S., courts have firmly drawn the line. As we saw in a case earlier, the U.S. Copyright Office held that AI-created works don’t qualify for copyright. Only humans can claim authorship. When it comes to infringement, the courts lean on contributory or vicarious liability. If a developer enables or ignores clear risks of infringement, they can be held accountable.[20]

United Kingdom

The UK takes a slightly different tack. Section 9(3) of the Copyright, Designs and Patents Act, 1988 says that the author of a computer-generated work is “the person by whom arrangements necessary for the creation… are undertaken.”[21] That offers some accountability, but isn’t foolproof. In shared environments, think open-source AI projects, it gets harder to say who really made “necessary arrangements.”

European Union

The EU’s legal framework is more layered. The AI Act[22], passed in 2024, introduces a risk-based classification system. High-risk AI tools carry heavy obligations for developers and deployers. Meanwhile, the Directive on Copyright in the Digital Single Market (2019)[23] allows text and data mining for research, unless a rights holder opts out. In practice, this leans toward holding developers liable. The system is complex, but it tries to balance innovation with oversight.

Emerging Theories of Legal Accountability

A few new theories are emerging, though none have been universally adopted.

Attribution Theory: Assigns liability to the person who initiated or materially influenced the AI’s act, usually the user or developer.

Strict Developer Liability: Inspired by product liability law, this idea puts the burden on developers to anticipate and prevent misuse.

Electronic Personhood: A 2017 proposal by the EU Parliament suggested giving AI a kind of legal status, like corporations have. But this idea hit stiff resistance and hasn’t gone anywhere.[24]

Right now, India and much of South Asia lack a clear roadmap. While we can look to the U.S., EU, and others for guidance, the core idea is clear: AI can’t be punished or sued. The responsibility must fall on the people who build, operate, and benefit from it. For India and its neighbours, the task ahead is adapting their laws to the complexities of machine-driven creativity without losing sight of fairness, growth, and accountability.

CHALLENGES IN FIXING LEGAL RESPONSIBILITY

Assigning liability for AI-generated IP infringements isn’t merely procedural; it questions core legal tenets of authorship, agency, and intention. AI lacks intent, yet produces outputs that infringe. This creates a legal vacuum that risks stifling innovation or enabling unchecked misuse.

Lack of Clear Human Agency

AI outputs are data-driven and often produced with minimal human input at the final stage. This weakens traditional liability models based on intent or negligence. An AI-generated melody closely resembling a copyrighted song may lack any human copying, yet still constitute infringement. Indian IP law, rooted in human fault, struggles to handle such autonomous violations.

Attribution Problems: Who is Responsible?

The AI ecosystem involves multiple actors: developers (who write the code), data providers (who supply training sets), users (who input prompts), and platforms (who host the tools).

Without legal clarity, liability becomes diffuse, enabling blame-shifting and strategic litigation, with litigants targeting the most solvent or visible party.

In India, no statute or precedent guides whether the prompt engineer, host, or developer is liable.[25]

Technical Opacity (“Black Box” Problem)

AI systems lack transparency, making it difficult to trace the origin of infringing content or establish clear causation and intent, both crucial in legal analysis. Auditing these systems often raises trade secret and privacy concerns, as it may require access to proprietary training data. Proving that an AI “copied” a painting is nearly impossible without disclosing its confidential dataset.[26]

Jurisdictional Complexity

AI tools defy borders. Consider a model developed in the U.S., trained on EU data, hosted in India, used by a creator in Delhi, and published via a Singapore-based platform. Such global chains of infringement expose the limitations of India’s territorially bound IP laws, which are ill-equipped to address the transnational nature of AI-generated content.

AI Advancements Outpacing Law

AI evolves faster than the law can keep up: it mimics artistic styles with Van Gogh-like filters, creates deepfakes that blur the lines between IP and privacy, and autonomously compiles music, code, and video. Meanwhile, legal institutions remain reactive, resulting in a persistent gap between infringement and remedy.

We face a systemic mismatch: traditional IP laws can’t address autonomous, opaque, or cross-border infringements. Without tailored accountability mechanisms, enforcement falters and both creators and innovators suffer. Piecemeal fixes won’t suffice; a coherent, future-ready legal framework is imperative.

CONCLUSION

The urgency of building a strong legal framework for AI accountability in IP law is hard to ignore. India, as it climbs the ladder to becoming a global tech and innovation hub, faces a tough balancing act. It must protect the rights of creators while not clipping the wings of innovation. The time to act is now, through thoughtful legal amendments, judicial clarity, and focused policy steps.

Enactment of specific statutory amendments is required. IP laws weren’t made with machines in mind; Indian law still assumes that only humans create. This gap needs fixing. Amendments should define “AI-generated works” clearly. If an AI creates something, the person controlling it should be deemed the author. Liability could be presumed against developers or users unless they prove otherwise. The law should spell out duties for AI service providers: due diligence, prompt takedowns, maybe even disclosures about the AI’s training data. Safe harbour protections should depend on how transparent and fair the design process was.

Policy and institutional reforms would also help. India should set up a National Commission on AI and IP Law. Bring in technologists, lawyers, economists, and creators. Let them chart a balanced path: a new legal framework that works for both protection and progress.

There is also a need to develop judicial guidelines and doctrinal innovations. Let’s be clear: AI is a tool, not a person. Courts should say this loudly and clearly. Responsibility belongs to whoever controls or uses the tool. Doctrines like strict liability or negligent design can help courts fill the gap. But each case is going to be different. Judges need a checklist: How much human input was involved? Did the person know what the AI might do? Were there filters or audit logs? These things matter. Also, courts shouldn’t wait for the perfect case. They could take up suo motu PILs or listen to amicus briefs on AI and IP. These tools can help shape how we think about AI and the law, before problems get too big to manage.

International cooperation and soft law instruments are useful. India should lead and listen in forums like WIPO, as global rules on AI authorship and licensing norms are still forming. And soft law isn’t soft on impact. Model frameworks like the OECD’s AI Principles[27] are gaining traction. India could adapt these to its domestic context, gradually creating enforceable norms. Legal-tech literacy and stakeholder education should also be promoted. People running the system need to understand the system. Judges, lawyers, and policymakers should receive hands-on training in AI: not deep coding, but the basics of how AI works, where IP issues arise, and how to track infringement. And creators need more than just laws; they need tools. Even simple legal helplines could go a long way for someone facing AI-led exploitation.


Thus, to conclude, legal accountability for AI in IP isn’t just a legal project; it is a societal one. India needs a layered strategy: fix the statutes, guide the courts, strengthen institutions, and stay active in global dialogues. Because at the end of the day, protecting creators shouldn’t mean slowing down progress. And progress shouldn’t mean leaving justice behind.

A legal system that adapts with technology, not after it, will serve us all better.


[1] Copyright Act, 1957, § 2(d) (India).

[2] Patents Act, 1970, § 2(y) (India).

[3] Trade Marks Act, 1999, § 2(1)(zb) (India).

[4] United States Copyright Office, Copyright and AI, Part 2: Copyrightability (Issued January 29, 2025) (US).

[5] WIPO, WIPO Intellectual Property Handbook: Policy, Law and Use, WIPO Publication No. 941 (Issued 2020) (INT).

[6] Copyright, Designs and Patents Act, 1988, §9(3) (UK).

[7] NITI Aayog, National Strategy for AI (Issued on June 4, 2018) (India).

[8] Eastern Book Company v. D.B. Modak, (2008) 1 SCC 1 (India).

[9] Navigators Logistics Ltd v. Kashif Qureshi & Ors., 2018 SCC OnLine Del 11321 (India).

[10] Getty Images & Ors. v. Stability AI, [2025] EWHC 38 (Ch) (UK).

[11] Thaler v. Perlmutter, No. 1:22-cv-01564-BAH (D.D.C. Aug. 18, 2023) (US).

[12] Copyright Act, 1957, § 51 (India).

[13] Patents Act, 1970, § 6(1) (India).

[14] Information Technology Act, 2000, § 79 (India).

[15] MySpace Inc. v. Super Cassettes Industries Ltd., (2017) 236 DLT 478 (DB) (India).

[16] Copyright Ordinance, 1962 (PK).

[17] Ministry of Information Technology & Telecommunication (Pakistan), National AI Policy – Consultation Draft V1 (Issued 2022) (PK).

[18] Copyright Act, 2000 (BD).

[19] Patents and Designs Act, 1911 (BD).

[20] Thaler v. Perlmutter, No. 1:22-cv-01564-BAH (D.D.C. Aug. 18, 2023) (US).

[21] Copyright, Designs and Patents Act, 1988, § 9(3) (UK).

[22] AI Act, 2024 (EU).

[23] Directive (EU) 2019/790 on Copyright and Related Rights in the Digital Single Market (2019) (EU).

[24] European Parliament, Study on the Impact of AI on Copyright and Related Rights, IPOL/STU/2020/621926 (Issued November 2020) (EU).

[25] Institute of Company Secretaries of India, Elective Paper on AI, Data Analytics and Cyber Security (Issued 2022) (India).

[26] Xi, R., A Systems Approach to Shedding Sunlight on AI Black Boxes, 53 Hofstra L. Rev. 3 (Forthcoming 2025) (Issued August 26, 2024) (US).

[27] Organisation for Economic Co-operation and Development, OECD AI Principles (2024) (OECD).


Author: Sanjana Singh
