From Co-Design to Courtroom: Building a Fair Trial Adviser for Justice Systems

Abstract

The panel examined the growing backlogs in civil‑ and criminal‑justice systems (e.g., 80 million pending cases in Brazil, 50 million in India) and the widening gap between the complexity of fair‑trial guarantees and the time‑pressed capacities of judges and defence lawyers. The speakers introduced the Fair Trial Adviser (FTA) – a retrieval‑augmented generation (RAG) chatbot that delivers plain‑language answers, complete with authoritative citations, to questions about the right to a fair trial. The session traced the tool’s conception from a pre‑Summit hackathon, presented a live demonstration, reported on user feedback and ethical safeguards, and outlined next steps such as domestic pilots, multilingual support and open‑source release. An extensive Q & A covered practical, technical, and governance concerns raised by judges, lawyers, technologists and civil‑society participants.

Detailed Summary

1. The Scale of the Problem

  • Backlog statistics:
    • Brazil: ~80 million pending cases (the average judge decides 11 cases per day).
    • India: ~50 million pending cases, including 180,000 cases older than 30 years.
  • Complexity of the fair‑trial right:
    • The right to a fair trial is broken down into 13 component rights.
    • Those components have been interpreted in 28 000+ decisions by 25 international human‑rights bodies.
  • Practical bottleneck: Judges and defence lawyers cannot feasibly consult a 1 000‑page textbook or sift through tens of thousands of decisions while handling heavy caseloads.

“If judges are overwhelmed and lawyers are overstretched, isn’t this exactly the kind of problem that AI was designed to solve?” – Daisy Peterson

2. Why Existing AI Tools Fall Short

  • Commercial court‑AI tools are often:
    • Developed by private firms,
    • Not available in local languages,
    • Deployed without mandatory AI training for judges.
  • UNESCO survey (96 countries):
    • 44 % of judicial actors already use AI tools,
    • Only 9 % receive formal training on them.
  • Risks identified: Technology is outpacing safeguards, creating a “move‑fast‑break‑things” environment.

3. Introducing the Fair Trial Adviser (FTA)

3.1 What the FTA Is

  • An AI‑powered expert system that answers practical fair‑trial questions in plain language.
  • Legal foundation: Grounded in two Oxford‑published books—The Right to a Fair Trial in International Law and Freedom of Speech in International Law—that synthesize thousands of decisions by component.
  • Target audience: Initially judges and lawyers; later the tool may support frontline advisors and civil‑society users.

3.2 Technical Architecture

  • Retrieval‑Augmented Generation (RAG):
    • The LLM never generates answers solely from its pre‑training data.
    • Instead it retrieves relevant passages from a pre‑approved legal dataset (the books and the underlying case law).
    • The retrieved text is then used to ground the generated response, dramatically reducing hallucinations.
  • Partnerships: Developed jointly with Microsoft’s AI for Good Lab and the Office of Responsible AI (technical and governance partners).

“The tool does not generate from generic internet data; it pulls from a curated legal corpus, ensuring traceability and accountability.” – Wassim Hamidouche
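As an illustration, the retrieval‑grounded flow described above can be sketched in a few lines. The toy corpus, word‑overlap scoring, and prompt wording below are assumptions for demonstration only, not the FTA’s actual retriever or prompts:

```python
# Minimal sketch of a retrieval-augmented (RAG) flow: retrieve passages
# from a curated corpus, then ground the prompt in them. The corpus,
# scoring method, and prompt wording are illustrative assumptions.

def retrieve(query: str, corpus: list[dict], k: int = 2) -> list[dict]:
    """Rank passages by naive word overlap with the query
    (a real system would use vector embeddings)."""
    q_words = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda p: len(q_words & set(p["text"].lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_grounded_prompt(query: str, passages: list[dict]) -> str:
    """Constrain the model to answer only from retrieved, citable text."""
    context = "\n".join(f"[{p['citation']}] {p['text']}" for p in passages)
    return (
        "Answer using ONLY the passages below, and cite each source.\n"
        f"Passages:\n{context}\n\nQuestion: {query}"
    )

# Toy two-passage corpus; citations are placeholders, not real references.
corpus = [
    {"citation": "Fair Trial, ch. 4",
     "text": "the accused has the right to be present at trial"},
    {"citation": "Fair Trial, ch. 9",
     "text": "judgment must be delivered within a reasonable time"},
]
passages = retrieve("can the trial continue if the accused is not present", corpus)
prompt = build_grounded_prompt("Can the trial continue in the accused's absence?", passages)
# `prompt` would then be sent to the LLM; the answer is grounded in the
# retrieved passages rather than the model's pre-training data.
```

A production retriever would use embeddings over the full curated corpus, but the grounding principle is the same: the model only ever sees pre‑approved text, which is what makes each answer traceable to a source.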

3.3 Demonstration of a Typical Use‑Case

  • Question example: “The defendant has fallen ill and left the courtroom. Can the trial continue with counsel present?”
  • Chatbot response: Provides a structured answer, cites specific international decisions, and may ask a clarifying follow‑up before finalising the recommendation.
  • Other showcased queries:
    • Whether a criminal trial can be terminated early after one day of hearing.
    • How AI‑generated evidence must be handled under fair‑trial guarantees.

4. Co‑Design Hackathon (January, University of Oxford)

4.1 Hackathon Structure

  • Duration: 1.5 days, pre‑Summit event.
  • Participants: Judges, magistrates, lawyers, technologists, academics, civil‑society representatives, and students.
  • Agenda:
    1. Introduction to fair‑trial rights and the prototype’s technical design.
    2. Hands‑on sessions split into three challenge streams:
      • User Experience (UX).
      • Ethical & Transparency Frameworks.
      • Legal Knowledge Curation & Integration.

4.2 Core Objectives

  1. Clarify the tool’s core purpose & primary user group (e.g., judges first).
  2. Embed ethical safeguards from the start (transparency, accountability, avoidance of over‑reliance).
  3. Refine the legal knowledge base (decide which cases, treaties, and statutes to prioritise).

4.3 Participant Feedback – Strengths

  • Time‑saving: “Significant reduction in legal‑research time.”
  • Clear, structured guidance: “Answers are well‑formatted and immediately actionable.”

4.4 Participant Feedback – Areas for Improvement

| Category | Specific Issues Raised |
| --- | --- |
| Usability | Enter‑key submission, footnote formatting, clearer follow‑up prompts. |
| Scope & Audience | Need to articulate whether the tool is an advisor or a legal‑information portal; prioritize judges before expanding to other users. |
| Branding | Debate over naming (e.g., “Legal Information Tool” vs. “Judicial Compass”). |
| Legal Coverage | Keep the scope narrow initially; expand gradually across jurisdictions. |
| Domestic Integration | Pilot in a single jurisdiction, aligning with national codes and procedural rules. |
| Accessibility & Inclusion | Add language filters, multi‑language support, mobile optimisation, voice input, export functions. |
| Risk of Over‑Reliance | Guard against judges treating the tool as a decision‑maker rather than an aid. |

4.5 Quantitative Insight – Citation & Transparency

  • Most‑requested improvement: Citation quality – participants wanted clickable, complete, primary‑source citations rather than references limited to page numbers in the book.
  • Transparency concerns: Model details, data‑curation processes, update frequency, exclusion criteria.

5. Ethical Safeguards & Governance

  1. Narrow Core Function: Limit the assistant to answering fair‑trial‑related questions.
  2. Retrieval‑Based Architecture: Guarantees that every answer is linked to a verifiable source.
  3. Independent Validation: Regular audits by legal experts to confirm accuracy and relevance.
  4. Explicit Limitation Statements: Prominent notices about the tool’s advisory nature and its inability to replace judicial discretion.
  5. Data‑Privacy & Retention Policies: Ensure that user queries (often privileged) are not stored long‑term or shared.
  6. Version‑Control & Update Stamps: Display “last updated” timestamps so users know the currency of the legal material.

“We must move from ‘Can AI answer?’ to ‘Under what conditions should AI answer?’” – Lodovica Raparelli
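To make the safeguards concrete, safeguards 1 and 4 could be combined in code as a scope check plus a fixed advisory notice. This is a hypothetical sketch: the topic keywords, notice wording, and function names are placeholders, not the FTA’s actual implementation.

```python
# Hypothetical sketch of safeguards 1 (narrow core function) and
# 4 (explicit limitation statements). The keyword list and notice
# text below are placeholder assumptions.
from typing import Callable

FAIR_TRIAL_TOPICS = {
    "trial", "hearing", "defendant", "counsel", "evidence",
    "judge", "tribunal", "appeal", "witness",
}

LIMITATION_NOTICE = (
    "Note: this tool provides legal information as an aid only; "
    "it does not replace judicial discretion or legal advice."
)

def answer_or_refuse(query: str, answer_fn: Callable[[str], str]) -> str:
    """Refuse out-of-scope queries; append the advisory notice otherwise."""
    if not set(query.lower().split()) & FAIR_TRIAL_TOPICS:
        return "I can only answer questions about the right to a fair trial."
    return f"{answer_fn(query)}\n\n{LIMITATION_NOTICE}"
```

A real system would use a classifier rather than keywords, but the governance point survives the simplification: scope limits and limitation statements can be enforced in the pipeline itself rather than left to user discipline.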

6. Roadmap & Next Steps

| Milestone | Details |
| --- | --- |
| Technical Refinement | Ongoing collaboration with Microsoft AI for Good Lab to improve UI, follow‑up handling, export functions, and visual design. |
| Usability Enhancements | Implement keyboard shortcuts, improve citation linking, refine follow‑up dialogue. |
| Domestic Pilots | Target a jurisdiction (unspecified) to integrate national statutes, procedural rules, and local case law. |
| Regional & International Extensions | Test in bodies such as the African Court on Human and Peoples’ Rights; explore domain‑specific versions (e.g., Freedom‑of‑Speech Advisor). |
| Multilingual Expansion | Translate the interface and underlying corpus into additional languages; enable voice‑to‑text input. |
| Open‑Source Release | Make the prototype, documentation, and API publicly available; provide guidance for local deployments (online or offline). |
| Further Hackathons | Plan future co‑design events to iterate on feedback and to involve new stakeholder groups. |
| Training & MOOCs | Contribute to UNESCO’s upcoming MOOCs on judicial AI, embedding responsible‑use guidelines. |

7. Audience Q & A – Key Themes

| Questioner (role) | Main Query | Representative Answer / Insight |
| --- | --- | --- |
| Lawyer (UK focus) | Whether the tool can be sector‑specific (e.g., family law) | Tool currently covers any proceeding that triggers fair‑trial guarantees; sector‑specific granularity will be explored in domestic versions. |
| Design Intelligence (tech) | Specific citation challenges (how to link to primary case law) | Current version cites the page in the book; next version will provide direct links to the original judgments. |
| Audience (India) | Comparison with a generic LLM (ChatGPT) | Generic LLMs lack access to the licensed book and cannot guarantee traceable citations; FTA’s RAG approach mitigates hallucinations and ensures legal grounding. |
| Forensic Investigator | Availability of interaction logs for post‑hoc analysis | Logging is currently minimal to preserve confidentiality; future versions may include auditable logs under strict privacy controls. |
| Audience (Legal practitioner) | Whether the tool supplies the mens rea behind cited cases | At present, the answer reflects the book’s commentary; the team is working to surface the original reasoning from primary judgments. |
| Technical audience | Offline deployment feasibility | Online service is the default; an offline variant is technically possible but would require sufficient local compute resources and a packaged model. |
| Commercial inquiry | Pricing and licensing model | The tool is being released as open‑source, free to use; Microsoft provides technical assistance but no commercial licence is required. |
| Scaling concerns | Barriers across regions (digital infrastructure, data availability) | Challenges include varying levels of court digitisation, language support, and access to curated legal datasets; solutions will be jurisdiction‑specific. |
| Misuse prevention | Risk of the tool being used to “game” fair‑trial guarantees | Potential mitigation: restrict question types, embed refusal responses for adversarial prompts, and maintain human‑in‑the‑loop oversight. |
| Presentation & reporting | Need for a graphical/reporting layer (e.g., evidentiary forms) | Future versions could integrate multimodal inputs (images, audio) and generate structured legal documents; currently focus remains on text‑based advice. |
| Model flexibility | Ability to swap the underlying LLM (OpenAI, Azure, open‑weights) | Architecture is model‑agnostic; users can point the RAG pipeline at any supported LLM via Azure Fundamentals or other APIs, provided they have appropriate licences. |
| Tokenisation & API details | Who handles tokenisation when switching models? | Tokenisation is handled internally by the chosen LLM; the FTA platform does not modify it. |

Overall, the Q & A reinforced three cross‑cutting concerns: trustworthiness of citations, proper governance to avoid over‑reliance, and adaptability to local legal ecosystems.
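The model‑agnostic design raised in the Q & A can be illustrated with a minimal interface: the RAG pipeline depends only on a small contract that any backend can satisfy, so the underlying model can be swapped without touching the rest of the system. The names here (ChatModel, complete, run_pipeline) are illustrative assumptions, not the FTA’s actual API:

```python
# Illustrative sketch of a model-agnostic pipeline: the RAG layer depends
# only on a minimal interface, so the underlying LLM can be swapped.
# All names here are hypothetical.
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class EchoModel:
    """Stand-in backend; a real deployment would wrap an API client
    (hosted or open-weights) behind the same interface."""
    def complete(self, prompt: str) -> str:
        return f"[grounded answer for] {prompt}"

def run_pipeline(query: str, model: ChatModel) -> str:
    # Retrieval and grounding omitted for brevity; any object with a
    # matching .complete() method can be substituted here.
    return model.complete(query)
```

This also matches the tokenisation answer above: because each backend tokenises internally, swapping models requires no change to the pipeline itself.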

Key Takeaways

  • Massive case backlogs (tens of millions) and the complexity of fair‑trial rights create a clear need for AI‑assisted legal research.
  • The Fair Trial Adviser is a retrieval‑augmented generation chatbot that answers fair‑trial questions using a curated, citation‑rich legal corpus rather than unconstrained language‑model knowledge.
  • Co‑design via a pre‑Summit hackathon ensured that judges, lawyers, technologists and civil‑society actors shaped the tool’s UX, ethical framework, and legal knowledge architecture from the outset.
  • User feedback identified the tool’s biggest strengths (time‑saving, clear guidance) and concrete improvement areas (citation linking, UI tweaks, clearer purpose definition, multilingual support).
  • Ethical safeguards—narrow scope, transparent sourcing, independent audits, explicit advisory warnings, and robust data‑privacy—are baked into the roadmap.
  • Next steps include: refining the prototype, piloting domestically with integrated national law, adding multilingual and multimodal capabilities, releasing the system as open‑source, and delivering training/MOOCs on responsible judicial AI.
  • Audience concerns (risk of misuse, need for offline operation, model flexibility, and the importance of robust citations) were acknowledged and mapped to concrete development priorities.
  • The project exemplifies human‑centred, collaborative AI for justice: technology is built with jurists, not for them, and the ultimate aim is to strengthen fair‑trial guarantees while preserving judicial autonomy.
