Building AI for Bharat: From Innovation to Outcomes
Abstract
The panel explored how artificial‑intelligence solutions can move from prototype to real‑world impact for India’s diverse “Bharat” context. Participants shared concrete lessons from government‑run data‑exchange platforms, a legal‑tech transcription system, a mental‑health chatbot, and a large‑scale education‑tech nonprofit. Three guiding questions framed the debate: (1) what breaks when AI meets real users and entrenched systems; (2) how trust, safety, and explainability must be baked in from day one; and (3) what systemic changes are needed across government, capital markets, and builders to scale AI responsibly.
Detailed Summary
1. Opening & Framing
- Moderator introduced the panel, listed the speakers, and set out three focal questions for the discussion:
- What breaks when AI meets real users and real systems?
- How are trust, safety and explainability designed from day one?
- What must change across government, capital, and builders for AI to scale responsibly?
- A short video about Rocket Learning (an education‑tech nonprofit) was shown before the first round of discussion.
2. What Breaks When AI Meets Real Users
2.1 Data Availability & Quality – Sandeep Singhal (Telangana)
- Pilots across sectors (health, forestry, education, mental‑wellness) often stalled because high‑quality data were scarce.
- Example: an agri‑tech startup needed specific crop‑price data. Obtaining the right dataset took 3‑4 months of inter‑departmental requests, and the startup later rejected the data as unsuitable – six months lost.
- Solution: built the Telangana Agriculture Data Exchange Platform (2021) that aggregates and publishes sector‑wide datasets. By 2023 the platform hosted ≈ 1,100 datasets across health, education, agriculture, etc. The model was later expanded to a state‑wide data‑exchange covering all sectors.
- Takeaway: a centralised, government‑backed data infrastructure reduces time‑to‑data for startups and de‑risks AI pilots.
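The time‑to‑data point above can be made concrete with a toy catalog lookup. Everything here – the dataset names, fields, and the `search` helper – is invented for illustration; this is not the Telangana platform’s actual schema or API:

```python
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    sector: str
    tags: list
    updated: str  # ISO date of the last refresh

# A miniature stand-in for a government data-exchange catalog.
CATALOG = [
    Dataset("mandi-crop-prices", "agriculture", ["prices", "crops"], "2023-06-01"),
    Dataset("anganwadi-enrolment", "education", ["children"], "2023-05-20"),
    Dataset("phc-visit-counts", "health", ["clinics"], "2023-04-11"),
]

def search(sector=None, tag=None):
    """Filter the catalog by sector and/or tag; None matches everything."""
    return [d for d in CATALOG
            if (sector is None or d.sector == sector)
            and (tag is None or tag in d.tags)]

# The agri-tech startup's six-month hunt becomes a one-line query:
for d in search(sector="agriculture", tag="prices"):
    print(d.name, d.updated)  # mandi-crop-prices 2023-06-01
```

The design point is discoverability: once datasets carry consistent sector and tag metadata, a startup can evaluate suitability up front instead of after months of inter‑departmental requests.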
2.2 Product‑First, Research‑Backed Design – James Walsh (Agency Fund)
- Rocket Learning reaches hundreds of thousands of Anganwadi workers and millions of parents via a WhatsApp‑based lesson‑plan product.
- Key success factors:
- Product‑led approach – a fully engineered digital service rather than a loosely defined programme.
- Embedded rigorous research – launched a randomized controlled trial (RCT) within two years, partnered with top economists. Mid‑trial results showed limited impact; the team immediately A/B‑tested variants and iterated the product.
- Future opportunities:
- AI‑driven personalised coaching for Anganwadi workers (e.g., conversational agents that adapt advice to individual needs).
- Exportability – the model could be adapted for Brazil, Nigeria, and other large emerging markets.
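The mid‑trial iteration described above rests on simple comparisons between product variants. As a hedged sketch (the numbers are illustrative, not Rocket Learning’s data), a two‑proportion z‑test is one standard way to check whether one variant’s completion rate genuinely beats another’s:

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for the difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# Illustrative: 46% of parents completed the activity under variant A
# versus 40% under variant B, with 1,000 users per arm.
z, p = two_proportion_z(460, 1000, 400, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # significant at the 5% level here
```

In practice an evaluation team would also pre‑register the metric and correct for multiple variants, but the core loop – ship variants, measure a binary outcome, test the difference – is this simple.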
2.3 Capital Constraints & Compute Costs – Prashant Prakash (ACT)
- Pilot‑to‑scale gap is amplified for AI because compute is expensive and most pilots are not immediately monetisable.
- Four challenges identified:
- Capital allocation – investors favour products with quick, clear revenue streams (e.g., test‑prep). Non‑monetisable AI solutions struggle to attract funding.
- Engineering for efficiency – AI models require edge compute or specialised hardware to keep operating costs low.
- Archaic data ecosystems – fragmented, state‑level data silos make national scaling difficult (e.g., health data that works in Telangana may not integrate elsewhere).
- Continuous monitoring – unlike traditional products, AI systems need real‑time evaluation, bias checks, and performance drift monitoring from day 1.
- Donor education: donors must understand that up‑front compute spend is higher and be prepared to fund ongoing model maintenance rather than one‑off grants.
2.4 User Behaviour & Contextual Fit – Ramakant Vempati (Wysa)
- Original product (2017): an English‑only mental‑health chatbot for direct‑to‑consumer use.
- COVID‑19 pivot: added Hindi, then multilingual support, scaling to 100 countries and ≈ 1 billion conversations.
- Bharat‑specific challenges:
- Access: rural girls often share a phone for only a few hours a day; devices are monitored by family.
- Behaviour: a pure digital solution saw low adoption in municipal schools.
- Solution: a blended physical‑digital product – a workbook with QR codes that link to an anonymous web chatbot. The workbook can be distributed via schools (budgeted as “books”), while the chatbot provides private mental‑health support.
- Strategic shift: positioned the offering as social‑emotional learning within the education budget, rather than as a stand‑alone mental‑health app.
2.5 Legal‑Tech Integration – Arghya Bhattacharya / Utkarsh Saxena (Adalat AI)
- Problem: Indian courts suffer from manual transcription, poor handwriting, and delayed case processing.
- Pain‑killer approach: built an AI‑powered speech‑to‑text legal transcription tool (with custom language models for Tamil, legal Tamil, etc.) to replace handwritten “fair‑copy” departments.
- Security model:
- No third‑party APIs – all models run on in‑house compute.
- Decentralised encryption – each judge holds a unique key; decryption requires all judges’ keys.
- Domain‑specific language handling – legal jargon and procedural rules required dedicated training data.
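The “every judge’s key is required” property can be realised with n‑of‑n secret sharing over a symmetric master key. The sketch below is our illustration, not Adalat AI’s actual implementation: the SHA‑256 keystream cipher is a toy stand‑in for a vetted AEAD cipher, and `split_key`/`combine_key` are hypothetical helper names:

```python
import hashlib
import secrets

def split_key(master, n):
    """Split a master key into n XOR shares; recovering it needs all n."""
    shares = [secrets.token_bytes(len(master)) for _ in range(n - 1)]
    last = master
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def combine_key(shares):
    """XOR every share together to recover the master key."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

def xor_stream(key, data):
    """Toy keystream cipher: XOR data against SHA-256(key || block index)."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(a ^ b for a, b in zip(data[i:i + 32], block))
    return bytes(out)

master = secrets.token_bytes(32)
judge_shares = split_key(master, 3)      # one share per judge on the bench
ciphertext = xor_stream(master, b"order of the court")
# Decryption succeeds only when every judge contributes a share:
plaintext = xor_stream(combine_key(judge_shares), ciphertext)
print(plaintext)  # b'order of the court'
```

Missing any single share leaves the master key information‑theoretically unrecoverable, which is exactly the confidentiality property the panel described.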
- Outcome: after initial adoption, Kerala mandated the tool for every courtroom, creating a state‑wide precedent.
3. Building Trust, Safety & Explainability from Day One
- Government role (Rama Devi Lanka, NITI Aayog):
- Compute subsidies – the government can offset cloud/edge compute costs for startups, lowering the barrier for AI pilots.
- Infrastructure & market access – state ministries act as first customers (e.g., health‑wellness program in Telangana, agritech data platform).
- Ground‑truth insights – field teams uncover “nuances” (e.g., farmer credit relationships, school‑book procurement channels) that help tailor AI solutions.
- Adalat AI’s security stack: three‑pronged approach (no external API, end‑to‑end encryption, domain‑specific models) to protect sensitive judicial data and comply with confidentiality requirements.
- Agency Fund’s evaluation framework:
- Technical AI model evaluation (bias, robustness).
- Impact evaluation through randomized trials to prove cost‑effectiveness to future payers (typically governments).
- Continuous monitoring: Prashant emphasised that AI systems must be observable – metrics, dashboards, and automated alerts are essential to maintain trust and safety over time.
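One concrete form such monitoring takes is a drift check on the model’s score distribution. The sketch below uses the Population Stability Index (PSI) with the commonly cited 0.2 alert threshold; the data, threshold, and function names are illustrative assumptions, not any panelist’s production setup:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of scores in [0, 1)."""
    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(int(x * bins), bins - 1)] += 1
        total = len(sample) + bins * 1e-4          # smoothing avoids log(0)
        return [(c + 1e-4) / total for c in counts]
    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 200 for i in range(200)]               # training-time scores
live_ok = [i / 200 for i in range(200)]                # live scores, unchanged
live_bad = [0.8 + (i % 40) / 200 for i in range(200)]  # live scores, shifted high

for name, live in [("stable feed", live_ok), ("shifted feed", live_bad)]:
    value = psi(baseline, live)
    status = "ALERT" if value > 0.2 else "ok"          # common rule of thumb
    print(f"{name}: PSI={value:.3f} -> {status}")
```

A dashboard would compute this on a rolling window and page the team when the alert fires, alongside bias audits and accuracy checks on labelled samples.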
4. Systemic Changes Needed to Scale AI Responsibly
4.1 Compute & Edge Infrastructure
- Edge AI computers (₹5,000‑₹15,000 devices) deployed in Karnataka schools for personalised curriculum generation; similar hardware could be rolled out to courtrooms.
- Government AI stack: need for differential pricing, optimisation of public‑cloud contracts, and standardised APIs that enable state‑level AI teams to build on shared services.
4.2 Talent & Skills Development
- NITI Aayog’s AI‑for‑Jobs roadmap – mapping future job profiles, recommending a national AI talent mission and a standing AI committee to coordinate skilling across ministries.
- Education pipeline: introduce AI concepts early (school level), expand to undergrad/PG programs, then fellowships for specialised sectors (MSMEs, agriculture).
4.3 Funding Models & Donor Strategies
- Agency Fund: focus on early‑stage nonprofit capacity building, helping them design monetisable pilots, run RCTs, and connect to government payers.
- ACT (Prashant): need to re‑educate donors about the higher compute costs and the necessity of long‑term maintenance funding.
- Philanthropy post‑AI (James Walsh): problems once set aside as cost‑ineffective “hairy” challenges become worth tackling as AI drives down the cost of delivery, shifting funders toward high‑complexity, high‑impact work.
4.4 Policy & Regulation
- Legal & financial AI tools raise questions about unauthorised advice, professional malpractice, and regulatory lag.
- Policing & cyber‑security: the Ministry of Home Affairs (MHA) is creating awareness budgets, establishing cyber cells, and recognising voice‑first interfaces as a means to broaden accessibility.
5. Audience Q&A Highlights
| Question | Main Points from Panelists |
|---|---|
| Curriculum changes for AI‑ready workforce | NITI Aayog roadmap; AI introduced at school level; National Skill Development Centres already offering AI courses; MSME sector needs a staged technology‑adoption path. |
| Police training for cyber‑threats in regional languages | MHA emphasises awareness, budget for training, and voice‑enabled tools to lower language barriers. |
| AI for finance/legal document summarisation | Legal‑tech example: transcription is hard; next layer requires subject‑matter experts to annotate data; regulatory safeguards needed for AI‑generated advice. |
| Philanthropic principles pre‑ vs post‑AI | Post‑AI focus on cost‑effectiveness, scaling “hairy” problems, and leveraging technology to deliver public‑sector outcomes. |
Key Takeaways
- Data is the foundation – without timely, high‑quality, interoperable datasets (e.g., Telangana’s data‑exchange platform) AI pilots cannot move beyond proof‑of‑concept.
- Compute remains a critical bottleneck; public subsidies, edge‑device deployments, and shared government AI stacks are essential to keep operating costs sustainable.
- Product‑first design coupled with rigorous evaluation (RCTs, A/B testing) dramatically improves adoption, as shown by Rocket Learning.
- Government can de‑risk AI deployment by (1) providing compute subsidies, (2) acting as first customers, and (3) facilitating data‑sharing ecosystems that respect local nuances.
- Security and privacy must be baked in from day 1—Adalat AI’s in‑house models and multi‑key encryption illustrate a viable approach for highly sensitive domains.
- Continuous monitoring, explainability, and bias‑auditing are non‑negotiable for maintaining trust once AI solutions reach scale.
- Human‑centered implementation matters: blended physical‑digital formats (Wysa’s QR‑code workbooks) and local‑language support are crucial for reaching underserved users.
- Philanthropic funding models need to evolve: donors must understand higher compute costs, fund long‑term model maintenance, and push for cost‑effective, scalable impact rather than short‑term pilots.
- Building AI talent across the public sector (national AI talent mission, AI‑for‑jobs roadmap) is required to move government teams beyond traditional IT delivery toward in‑house AI capability.
- Regulatory frameworks must keep pace with AI‑driven legal and financial tools to ensure safe, accountable advice and to avoid unintended malpractice.
Prepared from the verbatim transcript of the “Building AI for Bharat” panel held in Delhi, using the provided session description and speaker list.