Equipping the Judiciary to Harness the Power of AI
Abstract
The panel explored how artificial intelligence can be integrated into India’s overloaded judiciary while safeguarding rule‑of‑law values, human rights and multilingual access. Participants examined risk‑categorisation, data protection, transparent procurement, capacity‑building for judges, curriculum reform for law schools, and concrete AI products such as transcription tools and multilingual chatbots. UNESCO presented its 2023 survey (showing that 90 % of judicial professionals use AI without training) and unveiled a new policy brief. The session concluded with a lively audience Q&A covering liability, “law for AI”, point‑solution use cases, and whether AI should eventually become mandatory in courts.
Detailed Summary
1. Opening
- The Assistant Director General (ADG) of UNESCO opened the session, noting that a major UNESCO brief would be launched later.
- Because of a scheduling conflict, the brief would be presented after the panel discussion.
2. Panelist Contributions
2.1 Ms. Jhalak Kakkar – Ethical & Procedural Safeguards
| Topic | Key Points |
|---|---|
| Risk‑based categorisation | AI can assist administrative tasks (case‑scheduling, docket management) and substantive tasks (research, translation, decision support). Higher‑risk uses (e.g., AI‑assisted decision‑making, or translation where errors carry legal consequences) demand stricter oversight. |
| Auditing & assessment | Each risk tier should trigger a tailored audit (bias, transparency, explainability). Existing AI‑audit frameworks need to be adapted for the judiciary. |
| Data‑management | The Indian judiciary holds a massive, sensitive data trove. Robust data‑protection and privacy practices are essential before sharing data with startups or researchers. |
| Third‑party procurement | Courts lack in‑house AI expertise and will rely on vendors. Public procurement must be open and transparent, with clear evaluation criteria and periodic external audits to preserve public trust. |
| Balancing efficiency & constitutional rights | Efficiency must not eclipse due‑process, rule‑of‑law or multilingual rights. AI tools should be deployed cautiously, ensuring they do not undermine constitutional protections. |
2.2 Follow‑up on Multilingual & Constitutional Context
- AI systems trained on English data may not serve India’s multilingual reality.
- Capacity‑building for judges (MOOCs, UNESCO‑NLU Delhi AI‑law courses) is vital to convey that AI is socio‑technical and can embed bias if trained on skewed datasets.
2.3 Prof. Srikrishna Deva Rao – Adapting Legal Education
| Theme | Insights |
|---|---|
| Curriculum redesign | Legal education must add technology‑literacy (AI, data science) alongside traditional doctrinal subjects. |
| Multilingual teaching | Introducing AI‑driven translation can help teach law in regional languages, reducing the justice gap. |
| Clinical & experiential learning | A law‑technology honours track (second year onward) is proposed, alongside a “clinical legal education” model in which students work on real AI‑enabled projects (e.g., online dispute resolution). |
| Paralegal & community outreach | AI can boost legal‑literacy campaigns in underserved languages, and support community‑based dispute resolution. |
| Stakeholder collaboration | Curriculum development must involve law schools, law firms, IT firms, and civil‑society organisations. |
2.4 Arghya Bhattacharya – Adalat AI Solutions
| Application | Description |
|---|---|
| WhatsApp chatbot | Multilingual, citizen‑facing bot that, given name/pincode, tells users the status of their case and next steps. |
| Real‑time transcription | AI‑driven speech‑to‑text removes the need for handwritten notes during witness examinations; reported 2‑3× productivity gains and potential 30‑50 % reduction in case‑resolution time. |
| Fairness & non‑discrimination | Emphasises that AI should not provide legal advice; systems must acknowledge uncertainty (“I don’t know”) and be transparent about their limits. |
| Language coverage | Working to support all Indian languages & dialects to avoid English‑only bias. |
| Caution on generative AI | Advises against using large‑language‑model outputs for sensitive legal reasoning; prefers narrow, auditable tools (e.g., transcription, document navigation). |
2.5 Dr. Tawfik Jelassi (UNESCO) – Global Perspective & Policy Brief
- 2023 UNESCO survey: 90 % of judiciary professionals worldwide use AI without any training or guidelines – a “frightening” gap.
- Risks of biased data (gender, ethnicity, language, geography) can mislead judges.
- UNESCO capacity‑building: More than 10 000 judges/prosecutors trained via MOOCs on AI & the rule of law; over 36 000 trained on freedom of expression & journalists’ safety.
- Human‑rights‑centered AI principles: Emphasises human dignity, equity, ethics as non‑negotiable foundations for AI in courts.
- Policy brief launch: UNESCO’s new brief (digital copy shown via QR code) offers a blueprint for national AI‑for‑justice policies, audit frameworks, and procurement guidelines.
- Guidance on adaptation: While UNESCO guidelines are global, they must be contextualized for each jurisdiction (e.g., India’s linguistic diversity).
3. Audience Q & A
| Question Theme | Summary of Responses |
|---|---|
| Liability of AI | Panel stressed that liability first requires clear domestic guidelines and audit mechanisms. Current AI tools act as assistive rather than decision‑making agents, so direct liability on AI is limited. Future discussions may draw on intermediary‑liability models. |
| “Law for AI” – standards | UNESCO’s Recommendation on the Ethics of AI (2021) provides a global framework; nations should adapt, not adopt verbatim, to reflect local legal traditions. |
| Point‑solution use cases (QC of critical documents) | Arghya warned that applying generative AI to sensitive documents is risky. Safer uses include transcription and document navigation (search‑and‑highlight) rather than full‑text generation. |
| Multilingual implementation | Discussed a human‑in‑the‑loop approach: AI translations are accepted only once legal experts prefer the AI output at least 80 % of the time. Projects in Karnataka show high acceptance when legal professionals validate outputs. Capacity‑building for translators remains a bottleneck. |
| Ensuring guidelines are followed | Suggested establishing court‑level data‑protection offices to monitor AI deployment, conduct periodic third‑party audits, and create feedback loops for grievance redressal. |
| Detecting bias in individual cases | Judges and lawyers must stay vigilant; systemic monitoring (pattern detection) is needed because individual bias may be invisible. Third‑party audits help surface hidden harms. |
| Protecting judicial creativity & “human touch” | Panel agreed AI should augment, not replace, judicial reasoning. Judges may cite AI usage to increase transparency, but the expressive, narrative style of judgments must be preserved. |
| Generational divide & employment | Emphasised that just as calculators didn’t eliminate mathematicians, AI will reshape but not eradicate legal jobs. Curriculum updates and continuous up‑skilling are essential. |
| Should AI become mandatory? | Kerala already mandates Adalat AI transcription in every courtroom. For broader mandates, the panel urged robust impact‑evaluation, infrastructure readiness, and demonstrable fairness gains before a compulsory rollout. |
4. Closing Remarks
- ADG thanked participants, re‑emphasised UNESCO’s five C’s (Convene, Converse, Comprehend, Commit, Collaborate) plus Capacity‑building.
- UNESCO’s policy brief, training resources, and ethical AI recommendations were presented (QR code shown).
- The panel reiterated that AI can transform justice delivery but must not alter the core values of the judiciary.
Key Takeaways
- Risk‑based AI integration is essential: administrative use (scheduling, transcription) is lower risk; substantive support (research, translation) carries higher risk and needs stronger audits.
- Data governance is a prerequisite; the judiciary must protect sensitive personal data before sharing with vendors or researchers.
- Transparent public procurement and regular external audits are critical to maintain public trust in AI‑enabled courts.
- Multilingual AI must be built with a human‑in‑the‑loop validation process; India’s linguistic diversity makes a one‑size‑fits‑all solution impossible.
- Legal education must evolve to include AI literacy, dedicated law‑technology courses, and clinical projects that bridge theory and practice.
- Adalat AI’s practical tools (WhatsApp chatbot, real‑time transcription) demonstrate how AI can improve access to justice for marginalized communities while respecting fairness and non‑discrimination.
- UNESCO’s global findings: 90 % of judicial professionals worldwide use AI with no training; UNESCO has responded with MOOCs, a new policy brief, and a recommendation on AI ethics that serves as a blueprint for national policy.
- Liability and accountability hinge on establishing clear domestic guidelines, audit regimes, and transparent procurement; AI remains an assistive tool, not a decision‑maker.
- Audience concerns—from AI liability to preserving judicial creativity—highlight the need for ongoing stakeholder dialogue, monitoring, and evidence‑based impact assessments.
- Mandating AI may be appropriate for low‑risk tasks (e.g., transcription, already required in Kerala) but broader compulsory adoption requires demonstrable fairness gains, robust infrastructure, and systematic evaluation.
Prepared from the verbatim transcript of the “Equipping the Judiciary to Harness the Power of AI” session at the AI + Justice conference, Delhi (2026). All speaker attributions follow the provided speaker list and transcript cues.