Daksh


The Silent Gatekeeper

I. Introduction

Most debates on the deployment of artificial intelligence (AI) in Indian courts focus on visible, attention-grabbing concerns: hallucinated case law, deepfakes in evidentiary records, research assisted by the Supreme Court Portal for Assistance in Court's Efficiency (SUPACE), or experimental live-transcription tools. Yet some of the earliest developments in judicial AI adoption have emerged not in these highlighted spaces but in largely unexamined, mundane administrative processes such as e-filing, registry scrutiny, and case categorisation. These activities take place in what might be called the backstage of a court, the registry, yet they play a significant role in the processing of case files. They may not produce dramatic courtroom moments, but they silently shape the everyday functioning of the courts: which matters receive urgent listing, how cases are categorised, and which filings get flagged as defective. Technical shortcomings at this level can hinder access to justice and carry serious structural risks, yet they remain the least scrutinised.

II. Methodology

The Under-Discussed Stages and their Pain Points

A quintessential example of AI adoption at the administrative stage is the collaboration between the Supreme Court and IIT Madras, under which machine-learning tools have been developed to detect defects in filings and extract metadata from pleadings. The prototype is currently being used by over 200 Advocates-on-Record (AORs), with a plan to integrate the tools into the Supreme Court's Integrated Case Management & Information System (ICMIS) for full-fledged deployment. These models are designed to scan pleadings for missing annexures, thereby directly affecting whether filings are accepted or returned to litigants. Nyaay.AI is another AI tool, reportedly used by the Supreme Court and 16 of the 25 high courts, making it one of the most widely adopted AI systems in the Indian judicial ecosystem. The platform not only assists in basic document search but also offers automated data extraction from pleadings, AI-driven defect detection during filing, case clustering, and bench allocation, all designed to make courts more efficient and accessible.

One of the core issues with AI at the scrutiny stage is accuracy. If the tool misreads a scanned annexure as missing or misinterprets pagination, it can delay the registration of the case. A similar conundrum involving digital infrastructure failure, although not specifically related to AI, was seen during the breakdown of the e-Jagriti consumer-justice portal. According to a PIL filed in the Punjab and Haryana High Court, the breakdown created a nationwide digital lockdown in the consumer justice system and denied access to justice to lakhs of consumers.

Another cause for concern is the lack of transparency. Publicly available descriptions of the AI tools deployed by the courts do not explain how those tools make decisions. Since courts are responsible for delivering justice and upholding the rule of law, they need to understand how an AI tool decides, for example, that one matter takes priority over another, or what categories the model uses to determine case type or urgency. Beyond this, lawyers and litigants must be informed when AI is applied to their case documents or hearings. At present, litigants and even most lawyers simply do not know what the algorithm is checking, what threshold triggers a defect, or how it decides that a case is non-urgent.

The tender documents for procuring AI systems issued by the Supreme Court and various high courts show a striking pattern: AI is being procured as if it were just another IT service. The Expression of Interest (EOI) and Request for Proposal (RFP) documents repeatedly frame AI as a tool for efficiency, speed, and consistency. AI systems are positioned as gatekeepers to the judicial process, but without any corresponding mechanisms for audit, contestation, or error correction. There is no mention of the fact that scrutiny decisions, categorisations, or auto-generated objections have due-process implications under Articles 14 and 21. Instead, they are treated as mere back-office optimisation tasks, even though they effectively decide who gets heard, within what limitation period, and under what category. A cumulative reading of these tender documents shows that India is constructing an AI-assisted judicial infrastructure through procurement language rather than constitutional design. The details are exhaustive with respect to hardware, bandwidth, project turnover, and staffing patterns, but almost silent on rights, remedies, transparency, accountability, and contestability.

When it comes to accountability infrastructure, the Kerala High Court's policy on the use of AI tools in the district judiciary is the most explicit judicial policy on AI in India. It requires systematic documentation and audit trails for any AI usage and mandates that outputs be subjected to human verification. However, no similar audit framework is publicly visible for national-scale tools like the Supreme Court's defect-detection prototypes or the Nyaay.AI deployments across multiple high courts. There is no published data on error rates by case type, language, or court; no reporting of false positives in defect detection; and no litigant-facing mechanism to request an explanation when a case is flagged as defective or categorised under a particular case type. What we have today, outside Kerala, is effectively a black box.

III. Conclusion

The judiciary needs to shift to a proactive governance model that recognises the pre-judicial administrative layer as the real frontier. Courts must ensure that AI systems used in court meet the same standards of fairness and accountability expected of human decision-makers. For example, if AI is used for scrutiny and listing, the court must publish its defect-detection criteria, urgency rules for case listing, and classification logic, along with bias and accuracy audits. There must also be a clearly laid-down, rights-based mechanism for litigants to challenge algorithmic decisions at the filing or listing stage. Black-box administrative classifications must be replaced with transparent, reviewable reasons. If Indian courts act now to establish a rights-based AI governance framework, India can set a global standard for how courts adopt and deploy AI.


© 2021 DAKSH India. All rights reserved
