
Imagine the reaction of a small business owner who receives a binding arbitral award from an ODR platform. Her claim has been rejected by a proprietary AI algorithm with a vague reference to pattern analysis and a confidence score. On requesting further explanation, she learns that the algorithm is protected as a trade secret. This hypothetical scenario captures the central dilemma of AI-ODR: is the efficiency of AI compatible with the fundamental right to a fair trial and the legal right to privacy? ODR has become an increasingly important instrument of justice, reducing the cost and complexity of legal processes. However, the lack of transparency in AI-ODR poses significant legal problems and risks creating a two-tiered justice system in the digital era.

The interaction between human rights and artificial intelligence in online dispute resolution (ODR) processes is now under serious strain. This article argues that the use of artificial intelligence in ODR platforms without adequate transparency and oversight can violate the basic legal principle of fairness established by Article 21 of the Indian Constitution and the provisions of the new Digital Personal Data Protection Act, 2023. Absent a proper framework of accountability and effective safeguards, an AI-based digital justice system may perpetuate the very patterns of systemic exclusion it seeks to eliminate.

This analysis comprises four sections. It first establishes the current legal framework that validates ODR in India. The next section discusses the constitutional infirmities that algorithmic opacity creates under Article 21. It then examines AI-ODR under specific provisions of the DPDPA, 2023. It concludes by proposing a principles-based governance framework that reconciles technological development with the non-negotiable requirements of justice and privacy.
THE EXISTING LEGAL FRAMEWORK FOR ODR IN INDIA
Online dispute resolution (ODR) enjoys firm legal recognition in India, resting on numerous statutes and court decisions. This foundation frames how ODR is now evolving with artificial intelligence.

1) Statutory Support for Electronic Dispute Resolution: The Indian legal system has digitised its operations gradually. The Information Technology Act, 2000 gave legal validity to electronic records and digital signatures (Sections 4 and 5), enabling online contracts and electronic filings. The Arbitration and Conciliation Act, 1996 accepts electronic arbitration agreements under Section 7(4)(b), which recognises agreements concluded through electronic communications that provide a record for future reference, a position affirmed in Shakti Bhog Foods Ltd. v. Kola Shipping Ltd. More recently, in Belvedere Resources DMCC v. OCL Iron and Steel Ltd. & Ors., the court held that WhatsApp messages and emails can constitute an arbitration agreement where they reflect the parties' mutual intent, even without formal documentation. Section 530 of the Bharatiya Nagarik Suraksha Sanhita, 2023 (BNSS) allows courts to conduct proceedings electronically and to record evidence through audio-video means. Electronic dispute resolution is embedded in other statutes as well. The Consumer Protection Act, 2019 is a prominent example: since its inception, the E-Daakhil portal has enabled consumers to file and pursue complaints online, and state authorities have actively supported digital solutions. Technology-enabled dispute resolution thus rests on a robust, if intricate, statutory base, and the recently introduced procedural codes reinforce this legitimacy.

2) Judicial Support for Virtual Justice: The courts have consistently upheld the use of technology in dispute resolution. In State of Maharashtra v. Dr. Praful B. Desai, the Supreme Court held that evidence may be recorded by video-conferencing, a crucial step in reducing the need for physical presence in courtrooms. Courts have also approved service of legal notices via email and WhatsApp where conventional modes fail, treating these as credible channels of modern digital communication. These cases indicate that court procedures may take different forms and still retain their essential character, so long as fundamental principles are maintained.
THE CONSTITUTIONAL CONUNDRUM: ARTICLE 21 AND THE ‘BLACK BOX’
1. Article 21 and the Reasoned Order: The promise of Online Dispute Resolution (ODR) is greatest, and its risk most acute, when Artificial Intelligence (AI) shifts from a supporting system to a judging one. That shift raises fundamental questions under Article 21 of the Constitution because it strikes at the basic framework of a fair trial. The protection of life and personal liberty under Article 21 has been read to require a procedure that is "fair, just, and reasonable", not merely one prescribed by law. This includes the rules of natural justice (audi alteram partem and nemo judex in causa sua), the right to a fair hearing, and the right to a reasoned order. A reasoned decision is indispensable: it informs the parties of the basis of the decision, enables effective review by the courts, and establishes judicial accountability. The Supreme Court has made it clear that justice must not only be done but must be seen to be done, a principle that clandestine automation endangers.

2. The Algorithm's 'Black Box': AI decision-making systems often exhibit the "black box" phenomenon: in complex models, the relationship between input and output cannot be explained even by the developers. This opacity has a direct legal consequence for explainability. When an ODR system applies such AI to decisions or recommendations, it actively violates the requirement of a reasoned decision. The adjudicative logic is shielded behind layers of proprietary code, denying the parties full due process. This offends the basic concepts of participatory justice, converting legal disputes into unintelligible machine operations.

3. Algorithmic Bias and the Guarantee of Equality: Bias also creeps in, undermining the equality guaranteed by Article 14. AI systems are not neutral; they are trained on past data that carries social and historical biases. An AI model trained on previous arbitration decisions will reproduce discriminatory patterns, for instance in damages awarded to female-owned businesses or particular social groups, where such biases exist in its training data (the sketch at the end of this section makes the mechanism concrete). This endangers equality before the law under Article 14, because similarly placed parties may receive different rulings on account of algorithmic prejudice. Opacity aggravates the problem, making the bias difficult to identify, validate, and contest, and thus immune from judicial scrutiny.

4. The Accountability Gap and Judicial Caution: A further problem is responsibility when AI produces a wrong or biased decision. The laws now in force give no reliable guidance on who is liable when AI (or other automated) systems err, creating an accountability vacuum. This deficiency makes it difficult for aggrieved parties to obtain justice and a fair trial. Courts elsewhere have approached AI with caution: in the view of the European Court of Human Rights, the use of automated systems in rights-affecting decisions must not deprive individuals of effective protections. Seen through this lens, India must close the accountability gap before AI-ODR becomes a lawless zone.
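To make the bias mechanism concrete, the following minimal Python sketch trains a classifier on synthetic, historically skewed award outcomes. All feature names and figures are hypothetical assumptions, not any real platform's model; the point is only that a model trained on biased labels reproduces the bias, while emitting nothing but a probability that no party can interrogate.

```python
# Illustrative sketch only: synthetic data and hypothetical feature names.
# A classifier trained on historically biased award outcomes reproduces
# that bias, while its output (a label and a confidence score) carries
# no reasons a party could contest.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 5000

claim_merit = rng.normal(size=n)             # genuine strength of the claim
claimant_group = rng.integers(0, 2, size=n)  # protected attribute (0 or 1)

# Historical awards were skewed: group-1 claimants needed stronger claims
# to win, so the training labels themselves encode the discrimination.
won_award = (claim_merit - 0.8 * claimant_group
             + rng.normal(0, 0.5, n)) > 0

model = GradientBoostingClassifier().fit(
    np.column_stack([claim_merit, claimant_group]), won_award
)

# Two identical claims, differing only in the protected attribute.
same_claim = np.array([[0.3, 0.0], [0.3, 1.0]])
for row, proba in zip(same_claim, model.predict_proba(same_claim)):
    print(f"group={int(row[1])}  P(award)={proba[1]:.2f}")
# Typically group 0 receives a markedly higher probability for the very
# same claim, yet the platform can only report the score, not a reason.
```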
THE STATUTORY CLASH: AI-ODR UNDER THE DPDPA, 2023
AI-ODR platforms must comply with the Digital Personal Data Protection Act, 2023 (DPDPA) in addition to constitutional requirements. Several of the Act's fundamental principles confront existing AI systems with direct and potentially insurmountable obstacles.

1. Purpose Limitation and Data Minimisation (Section 5): Section 5 of the DPDPA stipulates that personal data may be collected only for a specified, clear, and lawful purpose, and that only the data necessary for that purpose may be collected (data minimisation). AI systems, especially machine learning models, operate on a logic at odds with data minimisation: they perform better with greater quantity and variety of data. Training an AI for dispute resolution may require personal data from thousands of past cases, far exceeding the data needed for the current dispute between two parties. This creates a fundamental compliance dilemma: is such secondary use of historical case data in AI training a compatible purpose under the Act?

2. Meaningful Consent (Section 7): The DPDPA mandates that consent be free, specific, informed, and unambiguous. Can a user's click-through acceptance of a platform's lengthy Terms of Service count as valid consent to complex AI processing that will affect their legal rights? The user is unlikely to grasp the full import of what is being agreed to, so such consent may be neither informed nor meaningful. This is a case study in applying the concept of meaningful consent to standard digital contracts marked by gaps in information and bargaining power.

3. Right to Explanation (Section 16): This is the point of greatest tension. Section 16 gives the data principal the right to an explanation of a decision made by a Data Fiduciary on the basis of automated processing. An ODR platform that uses AI to make or substantially influence a decision falls directly within this ambit. Yet the black-box nature of advanced AI may make it technically impossible to offer an explanation that is intelligible beyond a superficial output. A confidence score does not realise the objective of Section 16, which is to achieve transparency and permit meaningful challenge (a toy illustration follows at the end of this section). The DPDPA here subjects technological opacity to a direct test, and that opacity may render this important right unenforceable in the ODR context.

4. Significant Data Fiduciary Obligations and the Risk of Exemptions: ODR platforms handle highly sensitive dispute data, including financial information, personal communications, and identity documents. Large ODR providers are therefore likely to be notified by the Central Government as Significant Data Fiduciaries under Section 10. The Act imposes stricter obligations on such entities, including the appointment of a Data Protection Officer (DPO) and the conduct of Data Protection Impact Assessments (DPIAs). Section 17(2) and Section 8(7) of the DPDPA, however, pose a legal threat: they allow the Central Government to exempt certain fiduciaries (potentially including state-owned entities) from several requirements. Selective exemption of government-owned ODR platforms would confer unfair legal advantages on state-affiliated platforms while undermining the universal protection framework of the Act.
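The gap between a confidence score and a Section 16-style explanation can be shown in a few lines of code. The sketch below is purely illustrative: the feature names, data, and model are hypothetical assumptions, and a simple linear model is chosen deliberately because it permits the factor-level account that a genuine black-box model cannot readily provide.

```python
# Hedged sketch of the gap between a confidence score and an explanation.
# Feature names, data, and model are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["claim_amount_ratio", "documentation_score", "response_delay"]
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X @ np.array([0.5, 1.5, -1.0]) + rng.normal(0, 0.3, 500)) > 0

model = LogisticRegression().fit(X, y)

case = np.array([0.2, -1.1, 0.8])  # one hypothetical dispute
score = model.predict_proba(case.reshape(1, -1))[0, 1]
print(f"Bare output the party receives: P(claim upheld) = {score:.2f}")

# A functional explanation: each factor's signed contribution to the
# decision (coefficient x value; the intercept is omitted for brevity).
contributions = model.coef_[0] * case
ranked = sorted(zip(features, contributions),
                key=lambda t: abs(t[1]), reverse=True)
for name, c in ranked:
    verb = "supports" if c > 0 else "weighs against"
    print(f"  {name}: {verb} the claim (contribution {c:+.2f})")
```

The first print line is all that an opaque platform typically discloses; the loop that follows is the kind of plain-language, factor-level account that would give Section 16 real content.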
RECOMMENDATIONS: TOWARDS A PRINCIPLED GOVERNANCE FRAMEWORK
Harnessing the advantages of AI-ODR while protecting constitutional and statutory rights requires an active and sophisticated regulatory approach. The following measures could form the foundation of a governance framework that reconciles innovation with accountability.

1. Mandatory Algorithmic Audits and Transparency Standards: ODR platforms that use AI in decision-making must undergo regular, independent third-party audits. The audit results should be made public and should assess the algorithms for fairness, bias, accuracy, and intelligibility (a minimal check of the kind such an audit might run is sketched below). The law should require a baseline of functional explainability, that is, the capability to state the critical factors and justification for a decision in plain language, as a prerequisite to deployment in legally binding settings. The aim is a glass-box system, not a black box.

2. A Mandatory 'Human-in-the-Loop' for Binding Awards: A legally required human-in-the-loop must be present in every ODR process that produces a binding arbitral award or decision (see the second sketch after this list). The final ruling must be made by a certified human arbitrator or mediator. The adjudicator should be required to engage substantively with the AI output, exercise independent legal judgment, and produce a self-contained, reasoned order explaining the decision in plain terms with reference to the evidence. This satisfies the reasoned-order requirement under Article 21 and Section 16 of the DPDPA, and keeps human accountability at the core of justice.

3. Guidance from the Data Protection Board and Ethics by Design: The newly constituted Data Protection Board of India should issue clear guidance on how the DPDPA applies to AI-assisted decision platforms and ODR, clarifying what constitutes an adequate explanation under Section 16 and how valid consent is to be obtained under Section 7. A statutory duty of care should also obligate ODR providers and AI developers to follow the principles of ethics by design and privacy by design already articulated in international instruments such as the EU's Ethics Guidelines for Trustworthy AI (2019).
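By way of illustration for the first recommendation, one elementary check an audit might run is an outcome-rate comparison across claimant groups. The sketch below uses a toy, hypothetical decision log; real audits would draw on far richer data and metrics.

```python
# Toy sketch of one elementary audit check: comparing award rates across
# claimant groups in a platform's decision log (a demographic-parity
# test). The log and its fields are hypothetical.
from collections import defaultdict

decision_log = [
    # (claimant_group, award_granted); a real log would hold thousands
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, grants = defaultdict(int), defaultdict(int)
for group, granted in decision_log:
    totals[group] += 1
    grants[group] += granted

rates = {g: grants[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap = {gap:.2f}")
# A large, persistent gap (here about 0.33) would be flagged for
# investigation; one number never proves bias, but it makes the
# question visible and contestable.
```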
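For the second recommendation, the following sketch shows one possible shape of a human-in-the-loop gate: the platform refuses to issue a binding award unless a certified human arbitrator has recorded an independent, reasoned order. All class and field names are hypothetical.

```python
# Hedged sketch of a human-in-the-loop gate: the platform cannot issue a
# binding award from the AI recommendation alone; a certified human
# arbitrator must first record independent, plain-language reasons.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIRecommendation:
    outcome: str       # e.g. "award to claimant"
    confidence: float  # model confidence, advisory only

@dataclass
class ReasonedOrder:
    outcome: str
    reasons: str       # plain-language rationale citing the evidence
    arbitrator_id: str

def issue_binding_award(rec: AIRecommendation,
                        order: Optional[ReasonedOrder]) -> ReasonedOrder:
    """Refuse to bind the parties unless a human arbitrator has produced
    an independent, reasoned order (the Article 21 / Section 16 concern)."""
    if order is None or not order.reasons.strip():
        raise ValueError("No binding award: human reasoned order missing.")
    return order  # only the human order, never the AI output, binds

# Usage: the AI recommendation informs, but cannot replace, the decision.
rec = AIRecommendation(outcome="award to claimant", confidence=0.91)
order = ReasonedOrder(
    outcome="award to claimant",
    reasons="Invoices and delivery logs establish breach; damages follow "
            "from clause 7 of the contract.",
    arbitrator_id="ARB-001",
)
print(issue_binding_award(rec, order).outcome)
```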
