One of the quintessential examples of AI adoption at the administrative stage is the collaboration between the Supreme Court and IIT Madras, under which machine-learning tools have been developed to detect defects in filings and extract metadata from pleadings. This prototype is currently being used by over 200 Advocates-on-Record (AORs), with a plan to integrate the tools into the Supreme Court's Integrated Case Management & Information System (ICMIS) for full-fledged deployment. These models scan pleadings for missing annexures, thereby directly affecting whether filings are accepted or returned to litigants. Nyaay.AI is another tool that is claimed to be in use by the Supreme Court and 16 of 25 high courts, making it one of the most widely adopted AI systems in the Indian judicial ecosystem. The platform not only assists in basic document search but also offers automated data extraction from pleadings, AI-driven defect detection during filing, case clustering, and bench allocation, all designed to make courts more efficient and accessible.
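The actual models built by the Supreme Court and IIT Madras are not publicly documented, but the core idea of annexure defect detection can be illustrated with a deliberately simplified sketch: compare the annexures a pleading cites against those actually attached. Every name, pattern, and example value below is invented for illustration and does not reflect any deployed system.

```python
import re

def find_missing_annexures(pleading_text: str, attached: set[str]) -> list[str]:
    """Toy defect check: compare annexures cited in the pleading body
    against those actually attached to the filing."""
    # Match citations such as "Annexure A-1" or "Annexure P3" (invented pattern)
    cited = set(re.findall(r"Annexure\s+([A-Z]-?\d+)", pleading_text))
    return sorted(cited - attached)

# Example filing that cites three annexures but attaches only two
text = "See Annexure A-1 and Annexure A-2; relief as per Annexure A-3."
print(find_missing_annexures(text, {"A-1", "A-2"}))  # ['A-3']
```

Even in this toy version, the failure mode discussed below is visible: if OCR garbles a citation or an annexure label deviates from the expected pattern, the check wrongly reports a defect and the filing is returned.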
One of the core issues with AI at the scrutiny stage is accuracy. If the tool misreads a scanned annexure as missing or misinterprets pagination, it can delay the registration of the case. A similar conundrum involving digital-infrastructure failure, although not specifically related to AI, arose during the breakdown of the e-Jagriti consumer-justice portal. According to a PIL filed in the Punjab and Haryana High Court, the breakdown created a nationwide digital lockdown in the consumer justice system and denied access to justice to lakhs of consumers.
Another cause for concern is the lack of transparency. Publicly available descriptions of the AI tools deployed by the courts do not explain how those tools make decisions. Since courts are responsible for delivering justice and upholding the rule of law, they need to understand how an AI tool decides, for example, that one matter takes priority over another, or what categories the model uses to determine case type or urgency. Beyond this, lawyers and litigants must be informed when AI is applied to their case documents or hearings. At present, litigants and even most lawyers simply do not know what the algorithm is checking, what threshold triggers a defect, or how it decides that a case is non-urgent.
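The transparency problem can be made concrete with a hypothetical urgency-triage sketch. Nothing here reflects any real court system: the keywords, weights, and the cutoff of 0.5 are all invented. The point is that a single undisclosed number can decide whether a matter is listed promptly, and a litigant who never sees that number cannot contest it.

```python
# Hypothetical urgency triage: a filing is flagged "non-urgent" when a
# score falls below a cutoff that is never published. The keyword
# weights and the 0.5 threshold are invented for illustration only.
URGENCY_KEYWORDS = {"bail": 0.6, "stay": 0.4, "eviction": 0.3}
THRESHOLD = 0.5  # undisclosed in practice; litigants cannot contest it

def classify_urgency(summary: str) -> str:
    score = sum(w for kw, w in URGENCY_KEYWORDS.items() if kw in summary.lower())
    return "urgent" if score >= THRESHOLD else "non-urgent"

print(classify_urgency("Application for interim stay of eviction"))  # urgent
print(classify_urgency("Routine adjournment request"))               # non-urgent
```

A real model would be far more complex, but the opacity is structurally the same: without publishing the categories, weights, and threshold, the classification cannot be meaningfully challenged.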
The tender documents issued by the Supreme Court and various high courts for procuring AI systems show a striking pattern: AI is being procured as if it were just another IT service. The Expression of Interest (EOI) and Request for Proposal (RFP) documents repeatedly frame AI as a tool for efficiency, speed, and consistency. AI systems are positioned as gatekeepers to the judicial process, but without any corresponding mechanisms for audit, contestation, or error correction. There is no acknowledgement that scrutiny decisions, categorisations, or auto-generated objections have due-process implications under Articles 14 and 21. Instead, they are treated as mere back-office optimisation tasks, even though they effectively decide who gets heard, within what limitation period, and under what category. A cumulative reading of these tender documents shows that India is constructing an AI-assisted judicial infrastructure through procurement language rather than constitutional design. The documents are exhaustive on hardware, bandwidth, project turnover, and staffing patterns, but almost silent on rights, remedies, transparency, accountability, and contestability.
When it comes to accountability infrastructure, the Kerala High Court's policy on the use of AI tools in the district judiciary is the most explicit judicial policy on AI in India. It requires systematic documentation and audit trails for any AI usage and mandates that outputs be subjected to human verification. However, no similar audit framework is publicly visible for national-scale tools such as the Supreme Court's defect-detection prototypes or the Nyaay.AI deployments across multiple high courts. There is no published data on error rates by case type, language, or court; no reporting of false positives in defect detection; and no litigant-facing mechanism to request an explanation when a case is flagged as defective or categorised under a particular case type. What we have today, outside Kerala, is effectively a black box.
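The Kerala policy does not prescribe a technical format, but the kind of audit trail it contemplates is straightforward to sketch: a record of what the tool produced, who verified it, and whether the human upheld the output. All field names below are invented for illustration.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIAuditRecord:
    """Hypothetical audit entry for one AI-assisted scrutiny decision."""
    case_id: str
    tool: str          # which AI system was used
    output: str        # what the AI produced (e.g. a flagged defect)
    verified_by: str   # human officer who checked the output
    accepted: bool     # whether the human upheld the AI output
    timestamp: str     # when the verification happened (UTC)

def log_ai_use(case_id, tool, output, verified_by, accepted):
    rec = AIAuditRecord(case_id, tool, output, verified_by, accepted,
                        datetime.now(timezone.utc).isoformat())
    return json.dumps(asdict(rec))

print(log_ai_use("WP-1234/2025", "defect-detector",
                 "missing annexure A-3", "Registrar (scrutiny)", True))
```

Records like this, aggregated across courts, are exactly what would make error rates by case type, language, and court reportable, and what is currently missing outside Kerala.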