
CHAPTER 15

Solving for Scale – Using AI and predictive analytics for justice delivery

Joseph Pookkatt, Ashutosh Modi and Abhijeet Srivastava

“Some people call this artificial intelligence, but the reality is this technology will enhance us. So instead of artificial intelligence, I think we’ll augment our intelligence.”

– Ginni Rometty, Former Chairman, President and CEO of IBM

SUMMARY

Predictive analytics is the discipline within AI that makes predictions by identifying and analysing patterns in data;

The use of predictive analytics in law seems to be promising since judges can use it in the decision-making process and solve the issue of pendency at scale;

However, studies have suggested that due to the lack of understanding, such tools could result in biased and discriminatory output;

It is also a challenge to decide who should be held accountable if an AI system fails and delivers an incorrect output;

It is imperative to organise training programmes for judges to use these tools.

Introduction

Timely disposal of cases is essential to justice, a basic constitutional right of every citizen. However, with over 50.2 million cases 1 pending in courts, the Indian judicial system finds it increasingly difficult to deliver speedy justice.

In recent years, the use of Artificial Intelligence (AI) has become all-pervasive. Given its perceived efficiency, computer scientists have sought to find use cases for the application of AI to improve the quality and efficiency of justice delivery and to assist judges in the judicial decision-making process.

Predictive analytics is the discipline within AI that makes predictions by identifying and analysing the patterns in data. 2 According to the Statistical Analysis System (SAS), predictive analytics is about 'revealing previously unseen patterns, sentiments and relationships'. 3 With the use of this technology, known data can be used to analyse the regularities within the pattern for predicting a future event. 4 The use of predictive analytics tools in the judicial decision-making process could be a significant game changer for the expeditious disposal of cases. However, the use of predictive analytical tools brings various challenges to the forefront – liability, 5 ethical, 6 and technical challenges. 7 Therefore, to reap the maximum benefit of this technology, it is crucial to develop well-designed Artificial Intelligence/Machine Learning (AI/ML) models taking into consideration both the technical implications and the constitutional rights of the citizens of India.

In this article, we look at the potential benefits of using predictive analytics to assist the Indian judiciary in the decision-making process, the various AI/ML models in use, international approaches, and possible constitutional challenges to the use of predictive analytics in the judiciary.

I. Pendency of cases

Over 50.2 million cases are pending as of July 2023 in Indian courts. As shown in Figure 1 and Figure 2 below, between 2010 and 2020, pendency across all courts grew by 2.8 per cent annually. Of these 50.2 million pending cases, 87.6 per cent were in subordinate courts and 12.3 per cent in high courts. 8 In 2020, fewer cases were filed on account of COVID-19 lockdowns; however, pendency still increased because the disposal rate was slower than the filing rate. 9

II. Initiatives in India

The Supreme Court of India (SCI) has to date not publicly indicated its willingness to adopt AI in relation to the judiciary. However, the SCI spearheaded an initiative by launching SUPACE (Supreme Court Portal for Assistance in Courts Efficiency), an assistive tool for judges to read and extract relevant facts from case filings. 10 Justice L Nageswara Rao mentioned that SUPACE intended to develop a system where the software would analyse the filings and provide answers to factual questions that a judge may have while hearing a case. 11


Figure 1: Pendency of Supreme Court Cases (In Thousands)

Source: https://prsindia.org/policy/vital-stats/pendency-and-vacancies-in-the-judiciary


Figure 2: Pendency of Cases (In Lakhs)

Source: https://prsindia.org/policy/vital-stats/pendency-and-vacancies-in-the-judiciary

The SCI also launched SUVAS (Supreme Court Vidhik Anuvaad Software), a system that helps translate English case documents into Indian languages and vice versa. 12 However, public records indicate that only 31 judgments were translated between March 2020 and December 2021. 13 Since it is a new initiative, translators are required to verify the system's output for it to work correctly, and Justice Rao noted that it is quite difficult for the court to find translators. 14

Given the limited role of SUPACE and SUVAS in addressing pendency, it is important and appropriate to consider alternative approaches, including predictive analytics, which the courts in India could effectively use to address the backlog of cases.

Predictive Analytics and its role in future litigation

Predictive analytics is not a distant prospect, since corporations and academia have already begun deploying these tools in the legal sector. 15 Before examining how courts and governments have used predictive analytics in practice, it would be appropriate to survey the available AI/ML models.

I. Techniques of Predictive Analytics – An overview

Supervised ML algorithms learn the relationship between input and desired output. For example, consider the task of predicting whether a given image is a cat or a dog. For this task, the ML algorithm would be fed with images of cats and dogs, and would learn the distinctive features/attributes (fur, colour, size, etc.) that differentiate a cat from a dog. The algorithm learns the mapping from input to output via the features extracted from the input. In this example, the input is an image; the output is the label cat or dog. Supervised ML algorithms predict a label for the given input. Similarly, for performing prediction on text-based inputs, a typical process involves creating a corpus of documents annotated with corresponding labels, from a pre-defined list, that we would like the ML algorithm to predict. This corpus of annotated documents is fed as input to the ML model, which automatically grasps the functional relationship between input text and output labels. This entire process is referred to as supervised learning. Given a new, unseen and unannotated document, the ML model then predicts a label.
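As an illustration, the supervised text-classification pipeline described above can be sketched in a few lines of Python. The toy corpus, labels and scoring rule below are invented purely for illustration and bear no relation to any real legal dataset or deployed system:

```python
# Toy illustration of supervised learning on text: documents annotated with
# labels are used to learn a mapping from words to outcomes. The corpus and
# labels here are invented for illustration only.
from collections import Counter

train = [
    ("appeal allowed conviction set aside", "accepted"),
    ("appeal allowed sentence reduced", "accepted"),
    ("appeal dismissed conviction upheld", "rejected"),
    ("appeal dismissed no merit found", "rejected"),
]

def featurize(text):
    """Bag-of-words features: word -> count."""
    return Counter(text.split())

# "Training": accumulate word counts per label (a crude profile per outcome).
profiles = {}
for text, label in train:
    profiles.setdefault(label, Counter()).update(featurize(text))

def predict(text):
    """Score each label by overlap between document words and label profile."""
    words = featurize(text)
    scores = {
        label: sum(counts[w] for w in words)
        for label, counts in profiles.items()
    }
    return max(scores, key=scores.get)

print(predict("appeal dismissed for lack of merit"))  # -> rejected
```

A real legal predictor would replace the word-count profiles with a learned model, but the structure — annotated corpus in, label out — is the same.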

Due to their direct application to various use cases, predictive analytic techniques have been an active and prolific area of research in the past few decades. Different techniques have been proposed, starting from classical machine learning techniques such as decision trees 16 , random forests 17 , kNN (k-Nearest Neighbours) 18 and Support Vector Machines (SVM) 19 , and moving more recently to neural network-based deep learning techniques. In the past few years, deep learning-based models have dominated the scene, as these are data-driven and consequently do not require explicit feature engineering. In classical (non-deep learning) ML approaches, a typical process involves manually extracting features (relevant attributes) from the input and using these features to predict the output. This manual extraction of features is referred to as feature engineering and is tedious and prone to errors.

In contrast, deep learning algorithms do not require feature extraction, but learn the relevant features automatically from the input, thus making them more robust and less prone to errors. Moreover, these models have shown better generalisation capabilities than the classical ML models. Predictive analytic techniques in the legal domain have also moved from the use of classical ML techniques to, more recently, deep learning techniques, as outlined in section 4.2.

Models for predicting legal outcome

The use of AI in law has a long history. It began with the nearest neighbour algorithm and progressed to advanced models for predicting the outcome of cases. AI considers many aspects like substantive factual strengths, weaknesses, and rule-based issues. Recently, with the help of machine learning tools using neural networks, researchers predicted outcomes from case texts without recourse to traditional legal knowledge representation. 20 This section briefly examines some of the computational models for predictive analytics in the legal domain proposed in the literature.

A. Nearest Neighbour Algorithm

In 1974, MacKay and Robillard developed a computer programme for predicting the outcome of cases involving capital gains tax. 21 The model's aim was to assist judges in predicting whether a gain was a capital gain or an ordinary gain in a real estate transaction under Canadian tax laws. The predictions relied on 64 Canadian capital gains tax precedents that were represented in terms of 46 binary features. Each feature was based on facts that experts had found relevant to decisions on the issue in previous studies. The researchers applied a k-nearest neighbour (k-NN) algorithm that calculated the similarity or dissimilarity between the fact patterns of cases and predicted the decision in a given case.
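A minimal sketch of this style of k-NN prediction over binary fact patterns follows. The feature names, precedents and outcomes below are invented for illustration (the original model used 46 features over 64 real precedents):

```python
# Illustrative k-NN over binary fact features, in the spirit of the 1974
# capital-gains model: each precedent is a vector of yes/no facts plus the
# decided outcome. All features and precedents below are hypothetical.

# (held_long_term, stated_investment_intent, frequent_trader) -> outcome
precedents = [
    ((1, 1, 0), "capital_gain"),
    ((1, 1, 0), "capital_gain"),
    ((1, 0, 0), "capital_gain"),
    ((0, 0, 1), "ordinary_gain"),
    ((0, 1, 1), "ordinary_gain"),
]

def hamming(a, b):
    """Dissimilarity between two binary fact patterns."""
    return sum(x != y for x, y in zip(a, b))

def knn_predict(facts, k=3):
    # Sort precedents by similarity to the new fact pattern ...
    nearest = sorted(precedents, key=lambda p: hamming(facts, p[0]))[:k]
    # ... and take the majority outcome among the k closest cases.
    votes = {}
    for _, outcome in nearest:
        votes[outcome] = votes.get(outcome, 0) + 1
    return max(votes, key=votes.get)

print(knn_predict((1, 1, 1)))  # -> capital_gain
```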

B. Case-based arguments model

While arriving at a verdict, judges rely on precedents for deciding the ongoing case. In 2016, Matthias Grabmair developed the Value Judgment Based Argumentative Prediction (VJAP) programme, which is a computational model of case-based legal argument. VJAP performs legal reasoning and applies value judgments across cases, mapping them from one factual scenario to another. It constructs arguments using a factual scenario based on given facts in a way that substantiates a particular conclusion as per the applicable values. 22

C. Machine Learning (ML) models

In 2017, Daniel Martin Katz and Michael Bommarito developed a supervised ML programme for predicting whether a US Supreme Court justice, or the court as a whole, would uphold or overrule a lower court's judgment. 23 They deployed a random forest classifier for evaluating a case and predicting its outcome. The decision trees were trained on all previous decisions of that judge and the court, along with other precedents.
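The random forest idea — many trees trained on bootstrap resamples of past decisions, combined by majority vote — can be sketched as follows. This is a simplified illustration using one-split "stumps" and invented case features, not the actual Katz-Bommarito feature set:

```python
# Minimal random-forest-style ensemble: many simple trees trained on random
# resamples of past decisions, with the outcome decided by majority vote.
# The case features and data are invented (1 = reverse, 0 = affirm).
import random

cases = [
    ((1, 0), 1), ((1, 1), 1), ((1, 0), 1),
    ((0, 1), 0), ((0, 0), 0), ((0, 1), 0),
]

def train_stump(sample):
    """A one-split 'tree': pick the feature whose value best predicts outcome."""
    best = None
    for f in range(len(sample[0][0])):
        correct = sum(x[f] == y for x, y in sample)
        acc = max(correct, len(sample) - correct) / len(sample)
        flip = correct < len(sample) - correct  # predict opposite of feature
        if best is None or acc > best[0]:
            best = (acc, f, flip)
    _, f, flip = best
    return lambda x: (1 - x[f]) if flip else x[f]

def train_forest(data, n_trees=15, rng=random.Random(0)):
    forest = []
    for _ in range(n_trees):
        # Bootstrap: resample the training cases with replacement.
        sample = [rng.choice(data) for _ in data]
        forest.append(train_stump(sample))
    return forest

def predict(forest, x):
    votes = sum(tree(x) for tree in forest)
    return 1 if votes > len(forest) / 2 else 0

forest = train_forest(cases)
print(predict(forest, (1, 1)))  # -> 1 (reverse)
```

A production classifier would use full decision trees and random feature subsets per split, but the bootstrap-plus-vote structure is the core of the technique.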

D. Neural models for prediction on European Court of Human Rights cases

In 2019, Ilias Chalkidis and colleagues proposed neural network-based models for prediction on three tasks for cases from the European Court of Human Rights (ECHR): human rights violation classification, human rights violation type classification and case importance regression. 24 The proposed model takes the facts of a case and performs predictions for each task. The paper shows the superior performance of neural models over classical ML models, indicating the shift towards deep learning-based models for legal AI tasks.

One of the main differences between neural models and classical ML models is the features used for prediction. While the latter involve explicit feature engineering, deep learning grasps complex features automatically from the data itself. Moreover, the performance of deep learning models improves with the availability of more data (input-output pairs), as the model gets the chance to learn possible variations in the existing features and to learn new features that could improve predictions.

E. Court judgement prediction with explanation

Recently, Vijit Malik and colleagues proposed a deep learning-based hierarchical transformer model for predicting the outcome of SCI cases. 25 The proposed system takes a court case document as input and predicts whether the appeal is accepted. Additionally, their model explains how it arrived at a particular decision by indicating the salient sentences in the document that led to the predicted decision. Typically, it is difficult to interpret the predictions made by deep learning models, and the model proposed in the aforesaid project tries to overcome this Black Box limitation. This is a first step towards developing explainable AI models that could aid in assessing the accountability of an AI system (see also Section 6.1.1).
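One generic way such salient-sentence explanations can be produced is leave-one-out occlusion: re-score the document with each sentence removed and rank sentences by how much the prediction changes. The sketch below uses a stand-in keyword scorer and invented sentences; it is not the hierarchical transformer of the cited work:

```python
# Sketch of occlusion-based sentence salience: score the full document, then
# re-score with each sentence removed, and rank sentences by the drop their
# removal causes. The 'model' here is a stand-in cue-word counter.

def score_accept(sentences):
    """Stand-in 'model': counts cue words suggesting the appeal succeeds."""
    cues = {"allowed", "merit", "erred"}
    words = " ".join(sentences).lower().split()
    return sum(w in cues for w in words)

def salient_sentences(sentences, top_k=1):
    base = score_accept(sentences)
    drops = []
    for i, s in enumerate(sentences):
        rest = sentences[:i] + sentences[i + 1:]
        drops.append((base - score_accept(rest), i, s))  # drop when s removed
    drops.sort(reverse=True)  # largest drop = most salient sentence
    return [s for _, _, s in drops[:top_k]]

doc = [
    "The appellant filed the present appeal in 2015.",
    "The trial court erred in appreciating the evidence.",
    "Costs are left to the parties.",
]
print(salient_sentences(doc))
# -> ['The trial court erred in appreciating the evidence.']
```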

II. Use of predictive analytics by foreign courts

There are various approaches taken by foreign countries while deploying predictive analytic tools.

a. Estonia: Estonia uses predictive analysis tools only in low-value cases with de novo appeal to a human judge. The Estonian Ministry of Justice requested Ott Velsberg, the chief data officer, to assist in developing an AI-driven model for helping mediators adjudicate small claim disputes of less than EUR 7,000. 26 This AI-powered tool, with the help of machine learning models, identifies relevant supporting case laws similar to the issue at hand and generates a legal memorandum for providing the decree. It is pertinent to mention that this decree can be appealed before a human judge if any party is aggrieved by the AI-issued decision.

b. China: China has started adjudicating, through predictive analytic tools, cases pertaining to theft and motor vehicles wherein the decision parameters are simple and clear. The judges simply upload the complaints or the case files to get AI-generated preliminary judgments. The AI model is pre-trained on a corpus of 40 million case judgments. This corpus is used by the AI model while generating judgments and helps identify earlier, similar judgments on the same issues.

c. Brazil: Victor is a project sponsored by the Federal Supreme Court and the University of Brasília which seeks to assist the Brazilian Supreme Court by analysing the constitutional admissibility of ongoing cases 27 and by speeding up the analysis of pending cases.

d. The US: US courts have started using COMPAS to provide decisional support to judges dealing with criminal cases. It assists judges in determining whether an accused should be detained during pretrial or during sentencing. COMPAS generates risk scores displayed in the form of a bar chart, with three bars that represent pretrial recidivism risk, general recidivism risk, and violent recidivism risk. Each bar indicates a defendant's level of risk on a scale of 1 to 10. 28

Constitutionality of Predictive Analytical tools

The use of predictive analytics in law seems promising, since it can be used to assist judges in the decision-making process and consequently may solve the issue of pendency at scale. 29 Given this, it is essential to determine the constitutional validity of these tools considering Article 21 of the Constitution of India, 1950. To this end, the authors will 1) interpret Article 21, 2) discuss the implications of predictive analytical tools on the constitutional rights of the accused, and 3) analyse foreign case laws on the constitutional validity of predictive analytical tools and their persuasive impact in India.

I. Interpreting Article 21 of the Constitution of India, 1950

According to Article 21, 'No person shall be deprived of his life or personal liberty except according to procedure established by law'. It is important to understand three major expressions used in this article, i.e., 'life', 'personal liberty' and 'procedure established by law'. The expression 'life' has been interpreted liberally and broadly. 30 The SCI has held that the expression 'life' means something more than mere animal existence. 31 On the other hand, the expression 'personal liberty' has been given a very wide amplitude and refers not only to freedom from arrest or detention but also to all those varieties of rights which go to make up the personal liberty of man. 32 Finally, any deprivation of 'life' and 'personal liberty' shall only be as per the relevant 'procedure established by law'. 33 The SCI in several cases has interpreted the expression 'procedure established by law' and held that such a procedure must satisfy the requisite of being fair and reasonable. 34 Therefore, a person should not be deprived of his life and personal liberty based on any arbitrary, unfair and unreasonable procedure. 35

II. Implications of predictive tools on the rights of the accused

Presently, predictive analytical tools are based on deep learning techniques that are purely data-driven decision-making algorithms. 36 Such tools consider the accused as statistics rather than individuals. 37 Various studies have also suggested that, due to the lack of understanding, such tools could result in biased and discriminatory output. 38 Since these tools are opaque, it is impossible to determine the rationale for the final output they provide, which is commonly known as the Black Box problem. 39 Thus, an accused may be sentenced based on automated decision-making tools and may never get an opportunity to understand the rationale of such a decision or how the tool reached its conclusion. 40 This can have serious implications for the constitutional rights of the accused because, if a judge relies on an erroneous, biased or discriminatory output given by predictive analytical tools, the accused may be deprived of his life or personal liberty. The issue raises particular concern when such tools are deployed to determine bail or sentencing. Therefore, the question arises whether such a Black Box methodology meets the criteria of 'procedure established by law' laid down under Article 21 of the Constitution of India, 1950.

III. Foreign precedents on the constitutional validity of predictive analytical tools

To answer the above question, it is important to first analyse some foreign case laws where the use of predictive analytical tools was constitutionally challenged before the courts. We examine how foreign courts dealt with this issue against the backdrop of constitutional principles.

A. State v. Loomis

State v. Loomis 41 is one of the most relevant cases on the aforementioned issue. In 2013, Eric Loomis was charged in connection with a drive-by shooting in La Crosse. 42 He accepted his involvement in driving the car but denied his involvement in the shooting. Following Loomis's plea, the trial court ordered a pre-sentence investigation report which included the output from the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) risk assessment tool. 43 It generated a risk score that assisted judges in the sentencing process. Based on this output, the trial court sentenced Loomis to imprisonment with five years of extended supervision. 44

Aggrieved by the decision of the trial court, Loomis filed an appeal before the Supreme Court of Wisconsin (SCW) and challenged the trial court's reference to COMPAS in the decision-making process as a violation of his constitutional right to due process. Loomis argued that since COMPAS utilised a proprietary AI model, it could not be inspected by independent researchers, as there is no access to the corpus from which COMPAS predicted the risk scores. Therefore, the use of such tools violated his constitutional right as he was barred from verifying the accuracy of the decision given by the model. 45

However, the SCW rejected all of Loomis's claims. It held that if COMPAS had been used as a determinative factor, such use would have violated the 'due process' rights of the accused. However, in this case, the trial court provided an adequate explanation of the risk score provided by COMPAS and also examined the other relevant factors of the case. Thus, the risk assessment scores of COMPAS were 'not determinative', but were merely used as an observation to reinforce the trial court's assessment of other factors. The SCW also considered the issue of the appropriateness of predictive analytical tools like COMPAS in the decision-making process. 46 Subsequently, in 2017, Loomis filed a writ of certiorari before the Supreme Court of the United States to challenge the decision of the SCW. However, the court dismissed his petition. 47

B. People v. Younglove

In 2019, in a consolidated appeal, defendants challenged the use of COMPAS in the judicial decision-making process. 48 The defendants argued that any reference to COMPAS violates 'due process of law' as COMPAS statistically analyses data from a general population while making predictions. 49 They contended that the use of predictive tools is inappropriate as it violates the right to individualised sentences and also lacks transparency. Further, the defendants argued that by using such tools, the judiciary transfers its judicial discretion to software developers. The Michigan Court of Appeals rejected all the claims and held that the use of COMPAS by no means affected the right to an individualised sentence, as it is only one of the factors used during the decision-making process. 50

In our view, the above foreign judgments could be considered persuasive authorities on this issue. Further, for legitimising the use of such tools in assisting judges, legislatures need to make sure that the output provided by such tools is only used as an observation by judges, and not as a determinative factor. The judges would also be required to provide adequate rationale if they rely on such predictive analytical tools for arriving at a decision.

Demystifying the challenges and limitations of AI-enabled justice

While the deployment of predictive analytical tools may promise greater fairness, access to justice, and legal certainty, there are risks and challenges of leveraging such tools. In this section, the authors shall analyse the potential issues and challenges of using AI in the judicial system mainly in the context of: a) Determining the liability of a faulty AI system, b) Ethical issues, and c) Technical challenges.

I. Determining the liability of a faulty AI

Predictive analytical tools may be useful in assisting judges, but they sometimes give incorrect results. This may harm individuals, especially if a judge, based on an incorrect output, convicts or penalises a person who may actually be innocent. Thus, determining liability for faulty predictive tools becomes a sine qua non. Presently, due to the absence of an adequate framework, the judicial system is forced to adapt contract, tort and product liability laws. 51 However, in the absence of an appropriate mechanism for the identification of fault and causation, it becomes difficult to test the boundaries of such laws. 52 In this section, we address the potential challenges associated with determining the liability of these technologies.

A. Allocating liability in the complex AI ecosystem

AI is part of a broader technology stack that combines data-driven models delivered over the internet and facilitated by blockchain technologies. 53 Data is continuously created, exchanged, analysed, pooled, and reassessed. 54 Each of these advanced technologies carries its own independent risk. Consequently, when they are combined, it becomes difficult to allocate liability, as it becomes layered and complex. 55 Thus, it may be inherently difficult to reverse engineer the decision-making process of these tools to determine why the AI arrived at a given output. 56 Recently, explainable AI (XAI) technologies are being developed which would enable a system to explain its predictions. In the case of an error, this will help assess the faulty component. 57

B. Assigning responsibility among multiple parties

The second challenge is to decide who should be held accountable if an AI system fails and delivers an incorrect output. This requires a multi-layered assessment due to the involvement of several parties: AI developers, algorithm trainers, data collectors, controllers and processors, owners of the software, and the final users of the devices. 58 The issue becomes still more complex when these technologies communicate and engage with one another, because the propensity for errors shifts as data is constantly created, exchanged, pooled, analysed, and reassessed. 59

C. Inadequate legal remedies

Tort remedies are currently limited because AI is classified as intangible property. 60 AI/ML models are considered trade secrets rather than products, which precludes the aggrieved person from seeking remedy within the purview of product liability laws. 61 Moreover, due to a lack of privity between individuals and third-party software companies, the affected person may not be able to seek recourse for any claim including, but not limited to, defamation, invasion of property, or breach of duty. The privity of contract between the software companies and the state will always shield the company through various indemnification and disclaimer clauses. 62

II. Ethical Issues

Deployment of predictive analytical tools may also lead to various ethical issues. In this section, the authors focus mainly on the issue of: a) Black Box, b) Bias, and c) Accountability.

A. Black Box

Several studies suggest that AI systems today are mostly opaque. 63 Such systems may be successful in making predictions and decisions on behalf of human beings, but they fail to communicate the rationale behind those decisions. This is typically called the Black Box Problem. 64 The inability to provide a rationale arises mainly because such systems consist of multiple layers of interconnected artificial computing units (referred to as neurons) that analyse the patterns within the data. 65 At least 1 lakh neurons operate simultaneously to arrive at a final decision. 66 A layer or cluster of neurons encodes features extracted from the data, but these encodings are not intelligible for analysis by experts. 67 Further, knowledge embedded in these neurons cannot be reduced to a set of instructions, nor can any neuron or group of neurons reveal what the system finds interesting or important. 68 The system's power comes from 'connectionism'. 69 The complexity of these large multi-layered networks of neurons is what gives rise to the Black Box Problem.

The issue is of particular concern in the legal domain because it is contrary to the principles of natural justice, which dictate that a decision in a dispute should be based on reasons. 70 The disclosure of the reasons behind a decision acts as a safeguard against arbitrariness. Affected parties may be deprived of an understanding of how a decision was made, which may give rise to questions as to the constitutionality of such tools.

B. Bias

Bias is an inevitable consequence of algorithmic decision-making systems. 71 There are multiple points at which bias could be introduced within the algorithmic decision-making process, i.e., the input data, the design or performance of the algorithm itself, or the way in which the output is acted upon with human involvement. 72 Among all these entry points, the most critical is bias within the training data itself. 73 This mainly gives rise to the issue of accuracy at the time of deployment of such tools. Similar concerns were identified by ProPublica regarding the use of COMPAS by US courts for predicting recidivism in sentencing as well as bail decisions. Due to racial bias in the training data itself, the final output given by the predictive analytical tool was also found to be biased against individuals of colour.
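A simple audit of this kind of bias can be sketched as a disparate-impact check: compare the rate of "high risk" outputs across groups. The predictions and group labels below are fabricated purely for illustration and are not drawn from COMPAS or any real tool:

```python
# A toy audit in the spirit of disparate-impact analysis: compare a tool's
# "high risk" rate across demographic groups. All data here is fabricated.

# (group, predicted_high_risk) pairs produced by some hypothetical tool.
predictions = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def high_risk_rate(preds, group):
    rows = [p for g, p in preds if g == group]
    return sum(rows) / len(rows)

rate_a = high_risk_rate(predictions, "A")  # 0.75
rate_b = high_risk_rate(predictions, "B")  # 0.25
# Disparate-impact ratio: values far below 1.0 flag a skew worth investigating.
print(rate_b / rate_a)  # -> 0.333...
```

Such a check only surfaces a disparity in outputs; establishing whether the disparity stems from the training data, the algorithm or the deployment context requires deeper analysis.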

C. Accountability

This consideration mostly arises due to the opaque forms of AI in the decision-making process, which are influenced by factors such as the data used for training, algorithms, processes, training parameters, and deployment environment, among others. 74 Multiple entities may be involved during the development and deployment process. The 'many hands problem', allied with complex AI systems, complicates the issue of assigning liability under extant laws of accountability and legal recourse. 75 An AI system coupled with several interconnected factors behind individual decisions makes it challenging to attribute errors and assign responsibilities. 76

III. Technical challenges

Legal text differs from the texts typically used to train deep learning-based text models. Legal documents are typically long (over 3,000 tokens on average), the legal lexicon is distinct, and legal texts are typically unstructured and noisy. For instance, legal documents in India are usually typed manually. All these challenges make it difficult to adapt (e.g., via transfer learning techniques) existing state-of-the-art (SOTA) language models to the legal domain. 77 Consequently, predictive models for the legal domain need to be developed from scratch. This requires the annotation of thousands of documents by legal experts, which is extremely time-consuming and expensive. Though recent developments in the deep learning community (e.g., transformers) have made it possible to fine-tune models on small quantities of data, even these data need to be annotated.
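One common workaround for the document-length problem is to split a long token sequence into overlapping chunks that a fixed-length model can encode separately. The window sizes below are illustrative assumptions, not parameters from any particular system:

```python
# Sketch of chunking a long legal document for a fixed-length text model:
# overlapping windows preserve context across chunk boundaries.

def chunk_tokens(tokens, max_len=512, stride=256):
    """Split a long token sequence into overlapping windows."""
    chunks = []
    start = 0
    while start < len(tokens):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break
        start += stride  # overlap of max_len - stride tokens between chunks
    return chunks

doc = [f"tok{i}" for i in range(1200)]  # stand-in for a long judgment
chunks = chunk_tokens(doc)
print(len(chunks), len(chunks[-1]))  # -> 4 432
```

Per-chunk encodings are then typically pooled or fed to a higher-level model, which is the intuition behind hierarchical approaches to long legal documents.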

Currently, most deep learning models are Black Boxes; however, as outlined earlier, the legal domain requires a model that can explain the reasons for its predictions. This is technically challenging, as current deep learning models have billions of parameters and cannot be directly interpreted. It requires the development of specialised techniques for explaining models in the legal domain.

Moreover, India is a multilingual society, and courts at the lower levels, such as district courts, work in regional languages. Current SOTA models have primarily been developed for English and do not work well in other languages. Developing ML models for other Indian languages is challenging, as these languages are low-resource and lack annotated corpora. Hence, considerable effort needs to be devoted to multilingual technologies.

Recommendations and safeguards

Predictive analytical tools offer many opportunities for quick and effective judicial decisions. However, as discussed in the previous section, the deployment of such tools also poses various challenges and limitations. To reap the benefits while avoiding these challenges, a proper regulatory framework is a sine qua non. To this end, the present section enumerates different building blocks for legislatures to create such a framework.

I. Judicial training to use predictive analytical tools

A safeguard that must be implemented in the judicial system is to mandatorily organise training programmes for judges who use these tools. 78 To begin with, the right direction would be to follow the Loomis recommendations, i.e., forbidding sole reliance on predictive tools and including a warning to judges about the flaws of such tools. 79 In India, before the deployment of such tools, courts need to ensure that even if judges rely on them, these tools form only a part of the decision-making process rather than the whole. This would help avoid possible constitutional challenges of the kind witnessed in the USA.

Further, there is a need to increase judges' grasp of how these predictive analytical tools function. Justice Abrahamson, in her concurring remarks in the Loomis case, recognised a similar issue and recommended that judges be made aware of the basic functioning of these tools. 80 Such training programmes for increasing the awareness of judges in a specific domain are not a new concept. In the US, similar programmes are conducted for training federal district judges in scientific theory and methodology to enhance their ability to assess the reliability of expert testimony. 81 Thus, judicial academies may implement such training sessions or workshops to create awareness about predictive analytical tools. Such training may help judges in assessing the reliability of automated decisions, and may thereby mitigate the effects of automation bias and increase transparency. 82 This would ultimately ensure that judges are not swayed by algorithmic decisions and do not become rubber stamps for automated decisions. 83

II. Designing AI models on the Fairness, Accountability and Transparency (FAT) framework

A proper regulatory oversight mechanism needs to be established to ensure that developers create predictive tools that lead to more transparent and fair decisions, thereby avoiding discrimination, bias and inaccuracy. The fairness of such AI-enabled tools could be measured in terms of accuracy (the rate of correct predictions), recall (the ability to find all relevant results) and precision (the proportion of retrieved results that are correct). 84
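To make these three measures concrete, the following is a minimal, purely illustrative sketch of how they could be computed on a hypothetical set of binary predictions (1 = positive outcome flagged by the tool, 0 = negative). The `evaluate` function and the sample labels are assumptions introduced here for illustration, not part of any actual judicial system.

```python
def evaluate(actual, predicted):
    """Return (accuracy, recall, precision) for binary labels."""
    # Count the four outcomes of a binary classifier.
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)  # true positives
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)  # true negatives
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)  # false positives
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)  # false negatives

    accuracy = (tp + tn) / len(actual)                # rate of correct predictions
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # share of relevant cases found
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # share of flagged cases that are correct
    return accuracy, recall, precision

# Hypothetical example: 8 cases, 4 genuinely positive, model flags 4.
actual = [1, 1, 1, 0, 0, 0, 1, 0]
predicted = [1, 1, 0, 1, 0, 0, 1, 0]
acc, rec, prec = evaluate(actual, predicted)
print(acc, rec, prec)  # 0.75 0.75 0.75
```

Note that the three numbers can diverge sharply in practice: a tool that flags every case as positive achieves perfect recall but poor precision, which is why all three measures, not any single one, are needed to assess fairness.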

Further, due to the ‘many hands problem’, the debate over the accountability of predictive analytical tools is daunting. Diakopoulos and Friedler make some salient suggestions to increase the accountability of such tools. First, they stress that since these tools are built by developers, the developers should take responsibility for errors. Second, legislatures may need to constitute an authority comprising judges, technical experts and academicians to continuously identify, log and benchmark the sources of error and ensure accuracy. These internal checks would help acknowledge and understand the flaws of predictive analytical tools. Using this information, developers may redesign the AI/ML models to minimise possible errors before such tools are deployed in the judicial system. 85

Finally, the opacity of AI/ML models could be addressed by making such tools more transparent, enabling users to understand how a decision or prediction is made by the model. Studies have suggested publishing source code on widely accessible platforms like GitHub to enable public scrutiny and inspection. 86

The authors also recommend that the right direction may be to formulate a regulatory framework along the lines of New York’s first algorithmic accountability law, which came into effect in 2018. 87 That law aims to make automated decisions fair, accountable and transparent. It creates a task force to identify the disproportionate impacts of algorithmic automated decision-making systems. 88 It requires that agency decisions be archived so that the public can meaningfully assess the AI systems. 89 It also gives an individual affected by an automated decision the right to request an explanation of that decision, and requires a path of redressal for those harmed by a decision. 90

III. Auditing and certification mechanisms

There should be regular audits or certifications for assessing and monitoring the validity of AI/ML models. 91 This has also been recommended by the Article 29 Working Party under the GDPR for testing the efficacy of algorithms and automated data processing systems. Various studies have suggested that adopting independent, timely audits may be the best practice for automated decision-making tools, ensuring compliance with and protection of the constitutional rights of individuals. 92

IV. Promoting research

Before the deployment of predictive tools in the judicial decision-making process, there is a need for high-quality research to facilitate informed policy decisions on such technologies. Although foreign literature is available on this subject, given that India has its own social and cultural environment, it is important to critically examine the impact and challenges of such tools before deploying them in the Indian judicial system.

Conclusion

Any deployment of predictive tools in the Indian judicial system should be experimented with on a trial basis, initially on certain categories of civil and commercial cases where the parameters of the decisions can be applied uniformly and where no discretionary power is vested in judges. These include motor vehicle cases, traffic violation cases, product liability cases for food adulteration, legal metrology, insurance and banking-related cases. The final output of such predictive analytical tools should be considered only one of the factors in deciding the case, and the ultimate decision, whether to penalise or acquit the accused, should be taken solely by judges. Therefore, given the complexity posed by predictive analytical tools, it is essential to create a framework for regulating the use of artificial intelligence in the judicial sector so that these tools are both safe and effective.

Editors’ Comments

Predictive analytics is just one of the methods by which analytics can be pressed into the service of the judiciary. The next two chapters of the volume describe how Operations Research, a discipline centred around modelling, analysis and optimisation through data and sophisticated analytics, can serve the judiciary. The first of these, Chapter 16, provides an overview of the possibilities, and Chapter 17 discusses a specific use case. Finally, this part ends with a discussion of some pitfalls of statistics: all analytics depends on data processing and analysis, so a chapter highlighting key and fundamental concepts in dealing with data is in order. The next chapter serves this purpose by bringing forth the pitfalls that must be avoided while leveraging data for decision-making.

References