Roster Reform in the Supreme Court: Algorithms on the Horizon
India’s Supreme Court moves to AI-based roster allocation. Will this reform ensure transparency or create a new judicial black box?
In India’s Supreme Court, who hears your case matters just as much as what your case is about. The Chief Justice has traditionally decided which bench receives which matter, a role formally known as the ‘master of the roster.’ Outside legal circles, few discuss it, but within those circles, it is a frequent topic of conversation. For decades, this system has behaved like a black box: consequential, opaque, and hard to challenge.
The Court is now seeking to change this. Chief Justice Surya Kant has decided that AI will manage case listings and bench allocation, effectively removing human discretion from the process. The move came after a clear and embarrassing failure: a petition already dismissed by a three-judge bench in 2022 reappeared on the cause list before a new bench. CJI Kant was understandably upset. An internal investigation followed, as did an unprecedented wave of transfers within the registry, targeting officials who had held their positions for years and had covered up administrative failures, including improper case allocation.
In simpler terms, the catalyst for this change was not theoretical. It was a system caught failing in front of everyone.
This background is important. It’s easy to criticise reform without understanding what it addresses. This isn’t a case of technology being adopted for its own sake; it’s a response to documented problems, outdated infrastructure, and a registry that has, over time, become part of the issues it was supposed to solve.
In January 2018, four sitting judges made the seriousness of these concerns impossible to ignore. Justices Jasti Chelameswar, Ranjan Gogoi, Madan B. Lokur, and Kurian Joseph held a press conference, an unprecedented step for sitting judges. They warned that assigning ‘cases of far-reaching consequences’ to specific benches could threaten democracy. The roster, they argued, was not just an administrative tool; it was a form of institutional power operating without enough accountability.
CJI Kant's AI initiative responds, in part, to these enduring concerns. It offers standardised allocation and rule-based routing that will not depend on which registry official happens to be managing a file. These improvements deserve recognition.
However, the key question is not whether this reform is well-meaning—it clearly is. The real question is whether it’s enough and whether, in fixing one issue, it risks creating another.
When a person decides something, that decision can be reviewed and challenged. When an algorithm makes a decision, examining it requires access to reasoning that is often kept private. If the criteria governing the AI, such as how it weighs subject matter, urgency, and bench composition, are not disclosed or if their functioning cannot be independently verified, significant risks arise. These include the potential for undetected biases, accountability gaps, or errors that are difficult to identify or correct. Thus, if these criteria remain unknown, we do not reduce the system's opacity, but merely shift it.
The Supreme Court itself has raised concerns about using AI in judicial decisions, noting that technology cannot replace constitutional judgment. This caution is wise. It applies equally to algorithmic choices about which judges hear which cases: decisions that shape the environment in which constitutional judgment is exercised.
This does not mean we should oppose the initiative. The 2022 listing issue that prompted CJI Kant's move was exactly the kind of mistake a rule-based system should avoid. The registry transfers indicate that the Court is serious about dismantling the human networks that allowed for such failures. Both actions are steps in the right direction.
However, a well-functioning system and an accountable system are not the same. The criteria must be made public. The logic must be accessible. Litigants should understand why their cases were listed as they were, and there must be someone responsible when results seem incorrect.
Without this transparency, even a functioning system poses a risk: the same power, concentrated in new hands and better insulated from scrutiny.
The choice isn’t between human judgment and algorithmic neutrality. Algorithmic neutrality does not exist. The choice is between a black box that can be questioned and one that cannot.
CJI Kant has pinpointed the correct problem and taken decisive action to address it. What happens next, whether the system is designed with transparency or just efficiency, will determine if this is genuine reform or a more advanced version of the same old system.
The article is written by Utkarsh, a Ranchi-based journalist reporting on law, labour, and policy, with a focus on the intersection of rights and governance. His work has been featured in The India Forum and Feminism in India.