Realizing the Health Value of AI - Safely

Abstract

The session opened with a live demonstration of Eka.Care’s end‑to‑end AI‑driven health‑care platform, illustrating how a senior patient can book appointments, share medical history, receive AI‑generated summaries, and have doctor‑patient interactions recorded and transcribed automatically. The demo highlighted multilingual support, real‑time safety alerts, and seamless integration with India’s ABHA health‑ID and patient‑held records (PHR). After the demonstration, the moderator invited a panel of regulators, funders and researchers to discuss the broader challenges of delivering safe, trustworthy AI in health – including regulatory speed versus safety, the need for rigorous real‑world evidence, funding models that balance rapid innovation with risk mitigation, data‑privacy safeguards, operational decision‑support at the front line, and the design of reassuring AI agents for high‑anxiety contexts such as maternal‑infant care. The session concluded with the announcement of a new $60 million joint funding call for evidence generation on AI in health, followed by a brief Q&A and closing remarks about future collaboration.

Detailed Summary

1. Opening Remarks

  • Moderator (Dr. Sindura Ganapathi) opened the session by inviting participants to recall their most recent doctor visit, using the anecdote to frame the problem of fragmented health information and time‑pressured clinicians.
  • She introduced the core challenges the health‑care system faces today:
    1. Fragmented information across appointments, vitals, and records.
    2. Cumbersome patient history collection that forces patients to juggle multiple files.
    3. Clinician time spent on documentation rather than on patient counseling.

2. Eka.Care Demonstration (Presented by Vikalp Sahni)

2.1 Patient Journey – “Neeti”

  • Profile: 65‑year‑old female with diabetes, digitally literate, already possessing an ABHA (Ayushman Bharat Health Account) identity and a personal health‑record (PHR) app.
  • AI‑enabled data ingestion – Neeti uploads photographs of her legacy paper records; the AI extracts and digitises the information, producing a concise health summary.

2.2 Interaction with the AI Assistant

  • Neeti initiates a multilingual chat with the AI (“med‑assist”).
  • She reports a fever and a foot wound in English, while the backend silently translates the conversation into her preferred local language.
  • The AI asks probing questions (wound location, swelling, odor) using prompt‑driven, context‑aware dialogues that are easy for a senior user to navigate.

2.3 Automated Triage & Appointment Scheduling

  • The system flags the case as high‑priority and immediately suggests two available doctors for 14 February.
  • Neeti selects a doctor via a simple tap or typed response; an appointment is created in real time.
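The triage step above can be sketched as a simple rule: a minimal, hypothetical example of how a backend might flag a diabetic patient with a fever and foot wound as high priority. The risk factors and scoring here are illustrative assumptions, not Eka.Care's actual logic.

```python
# Hypothetical triage rule: diabetes plus a foot wound and fever
# is flagged high-priority; everything else is routine.
# The risk factors below are illustrative assumptions only.

def triage_priority(symptoms: set[str], conditions: set[str]) -> str:
    """Return 'high' when known high-risk symptom/condition combos match."""
    high_risk = {"foot wound", "fever"} <= symptoms and "diabetes" in conditions
    return "high" if high_risk else "routine"

print(triage_priority({"fever", "foot wound"}, {"diabetes"}))  # "high"
print(triage_priority({"cough"}, set()))                       # "routine"
```

A production system would of course draw on clinical guidelines and many more signals; the point is that the flag feeding the scheduler can be an auditable rule rather than an opaque model output.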

2.4 Doctor‑Side EMR Augmentation

  • When Neeti arrives at the clinic, the physician views a next‑generation EMR screen that already contains:
    • Summarised history (including uncontrolled diabetes).
    • AI‑generated prompts highlighting missing data.
  • The doctor activates Eka Scribe, an audio‑based transcription engine that records the consult, converts speech to structured medical notes, and populates the EMR automatically.

2.5 Safety Alert & Decision Support

  • The doctor attempts to prescribe Amoxicillin; the AI cross‑checks Neeti’s allergy profile and issues an alert (“Patient allergic to Amoxicillin”).
  • The doctor quickly switches to Clindamycin; the system updates the prescription and re‑generates patient‑facing instructions.
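The allergy check demonstrated here can be illustrated with a minimal sketch. The patient identifier, allergy list, and alert format below are assumptions for illustration, not Eka.Care's actual API.

```python
# Minimal sketch of a prescription-time allergy cross-check.
# Patient IDs, the allergy profile, and message formats are
# illustrative assumptions, not the demonstrated product's API.

ALLERGY_PROFILE = {"neeti": {"amoxicillin"}}

def check_prescription(patient_id: str, drug: str) -> str:
    """Return an alert if the drug matches a recorded allergy, else approve."""
    allergies = ALLERGY_PROFILE.get(patient_id, set())
    if drug.lower() in allergies:
        return f"ALERT: Patient allergic to {drug}"
    return f"OK: {drug} prescribed"

print(check_prescription("neeti", "Amoxicillin"))  # alert fires
print(check_prescription("neeti", "Clindamycin"))  # passes
```

In the demo this check runs synchronously at prescription time, so the physician sees the alert before the order is committed and can switch drugs in the same workflow.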

2.6 Multilingual Output & PHR Sync

  • All notes, drug instructions, and follow‑up advice are instantly translated into Neeti’s local language and pushed to her PHR app, creating a new node for future visits.

2.7 Highlighted Benefits

  • Consolidation of fragmented data into a single, patient‑controlled repository.
  • Reduced clinician documentation burden, freeing time for counseling.
  • Safety net through AI‑driven allergy checks and dosage verification.

2.8 Stated Challenges

  • Scalability across languages – building robust multilingual models for India’s linguistic diversity.
  • Data generation & model verification – need for large, high‑quality training sets that are compliant with regulatory standards.
  • Evaluation & oversight – who will audit the AI’s performance at scale?

Key Insight – Vikalp: “Technology is only about 10 % of AI adoption; the remaining 90 % is about people, ecosystems, and trustworthy processes.”

3. Transition to Panel Discussion

  • Moderator thanked Vikalp, invited the panelists onto the stage, and introduced each by name and affiliation:
    1. Dr. Richard Rukwata (Medicines Control Authority of Zimbabwe – regulator).
    2. Prof. Charlotte Watts (Wellcome Trust – funder/evidence generator).
    3. Dr. Monika Sharma (Novo Nordisk Foundation – funder).
    4. Dr. Trevor Mondel (Bill & Melinda Gates Foundation – funder).

4. Panel Discussion – Themes & Exchanges

4.1 Human Feelings & Hype vs. Fear

  • Trevor Mondel shared his emotional takeaway from the first two days of the summit: the atmosphere was “energising” despite the “depressing” early impressions of AI hype. He emphasized that the conference is moving from hyper‑sell / hyper‑fear toward meaningful, nuanced conversations about responsible AI deployment.

4.2 Ecosystem‑Centric View – Dr. Sindura Ganapathi (Moderator)

  • Emphasised that technology alone is insufficient; the real work lies in shaping ecosystems, defining human‑in‑the‑loop roles, and ensuring that AI augments rather than replaces clinicians.

4.3 Regulatory Perspective – Dr. Richard Rukwata

  • Dual pressures on regulators:
    • Speed – industry demands rapid approvals to avoid stifling innovation.
    • Safety – regulators must be accountable when adverse events occur.
  • Described a grant from the Gates Foundation aimed at using AI to screen marketing‑authorisation applications, hoping to create “neutral” AI tools that speed up review without bias.
  • Highlighted that computers are impartial and can serve as a common language between industry and regulators, potentially reducing friction.

4.4 Funding & Evidence Generation – Prof. Charlotte Watts (Wellcome)

  • Wellcome’s AI‑health portfolio emphasises rigorous real‑world evidence: not just laboratory performance, but how AI tools operate within health‑system workflows, affect different socioeconomic groups, and generate unintended pressures.
  • Stressed the need for cost‑effectiveness analyses and feedback loops from implementation to inform policy.

4.5 Funding & Risk Management – Dr. Monika Sharma (Novo Nordisk Foundation)

  • Described the tension between “speed” and “reflection”: moving too fast can cause errors that later undermine trust; moving too slowly delays lifesaving interventions.
  • Announced a new $60 million joint funding call (Wellcome, Gates, Novo Nordisk) aimed at generating real‑world evidence on AI in health for low‑ and middle‑income countries.
  • The call will fund projects that evaluate clinical decision‑support tools, operational AI for frontline health, and cost‑effectiveness. A QR code was displayed for details.

4.6 Funding Coordination – Dr. Trevor Mondel

  • Highlighted that the three foundations are aligning criteria, timelines, and evaluation standards to avoid “patchwork” expectations for researchers.
  • Framed the initiative as establishing shared standards and infrastructure for long‑term, scalable AI adoption in health.

4.7 Audience Q&A

  • Data‑privacy by design (audience question)
    • Vikalp Sahni (Eka.Care): The platform follows HIPAA, India's DPDP Act, and national health‑information exchange guidelines (e.g., NHA), with continuous compliance certifications, end‑to‑end encryption, and policy‑level safeguards.
    • Charlotte Watts: Funded evaluations will require strict anonymity, ethical clearances, and adherence to high‑quality research standards.
  • Operational decision‑support for the frontline (TB geospatial AI)
    • Charlotte Watts: Interested in primary‑care‑level interventions that improve reach to underserved communities; solutions must be scalable, affordable, and evidence‑backed.
    • Trevor Mondel: Global funding agencies (e.g., Global Fund, Gavi) have limited resources; AI‑driven risk targeting can optimise scarce resources and is a priority for investment.
  • Designing reassuring AI agents for maternal‑infant care
    • Trevor Mondel (with input from Vikalp): Recommended humour in the loop for user comfort, multi‑agent architectures with a grounding agent to keep conversations within safe boundaries, and a strong human‑in‑the‑loop team (a 10‑member medical team) to monitor outputs.
  • Future of the summit (next year in Geneva)
    • Vikalp envisions a transparent, fully explainable patient‑facing agent that never makes contraindication errors.
    • Charlotte wants more operational partners to share concrete implementation lessons.
    • Richard calls for enhanced regulator‑industry collaboration.
    • Trevor stresses keeping the doctor's final judgment central.
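The multi‑agent pattern recommended for maternal‑infant care can be sketched as follows: a conversational agent drafts replies, and a grounding agent screens them, escalating anything touching unsafe topics to the human medical team. The topic blocklist, function names, and escalation rule below are assumptions for illustration, not a described implementation.

```python
# Illustrative sketch of the panel's multi-agent recommendation:
# a conversational agent paired with a "grounding" agent that keeps
# replies within safe boundaries and escalates risky queries to humans.
# The blocklist and escalation message are illustrative assumptions.

UNSAFE_TOPICS = {"dosage change", "stop medication"}  # assumed blocklist

def conversational_agent(message: str) -> str:
    """Placeholder for an LLM-backed draft reply."""
    return f"Draft reply to: {message}"

def grounding_agent(message: str, draft: str) -> tuple[str, bool]:
    """Pass safe drafts through; escalate anything touching unsafe topics."""
    if any(topic in message.lower() for topic in UNSAFE_TOPICS):
        return ("A clinician from our care team will follow up shortly.", True)
    return (draft, False)

def respond(message: str) -> tuple[str, bool]:
    """Return (reply, escalated_to_humans) for an incoming patient message."""
    draft = conversational_agent(message)
    return grounding_agent(message, draft)

reply, escalated = respond("Can I stop medication for my baby?")
print(reply, escalated)  # escalated is True for this query
```

The design choice matches the panel's point: the grounding agent does not need to be clever, only conservative, because flagged conversations go to the human‑in‑the‑loop team rather than back to the user unreviewed.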

5. Closing Remarks

  • The moderator thanked the panel and audience, presented a souvenir on behalf of the Indian hosts, and reminded participants that the next summit will be held in Geneva.
  • Final sentiment from the group: optimism tempered with responsibility – AI can unlock health impact, provided the ecosystem, evidence base, and regulatory safeguards evolve together.

Key Takeaways

  • End‑to‑end AI platform (Eka.Care) can digitise fragmented records, schedule appointments, provide multilingual conversational triage, and auto‑generate safe, verified clinical notes while alerting for drug allergies.
  • Human‑in‑the‑loop remains essential; AI should support, not replace, clinicians, especially in high‑risk contexts (e.g., maternal‑infant care).
  • Regulators face a “speed‑vs‑safety” paradox; AI‑driven tools for application screening can help create a neutral, shared evidence base that satisfies both industry and oversight bodies.
  • Funders (Wellcome, Gates, Novo Nordisk) are committing $60 million to a joint call focused on real‑world evidence for AI in health, prioritising low‑ and middle‑income settings, cost‑effectiveness, and operational feasibility.
  • Rigorous evaluation standards (ethical clearances, anonymity, cost‑benefit analysis) are non‑negotiable for funded projects; fragmented funding criteria will be harmonised to reduce researcher burden.
  • Data privacy must be addressed on two fronts: policy compliance (HIPAA, DPDP, NHA guidelines) and technical safeguards (encryption, federated learning, synthetic data).
  • Frontline decision‑support (e.g., geospatial AI for TB case‑finding) is of high interest, but must demonstrate system integration, affordability, and measurable health impact.
  • Multi‑agent AI architectures with a grounding agent and human oversight are recommended for complex, anxiety‑inducing health interactions.
  • Future collaboration should move from hype to operational learning, with more industry‑regulator partnerships, transparent model explanations, and continued centrality of the physician’s judgment.

Prepared from the verbatim transcript of the “Realizing the Health Value of AI – Safely” session at the India AI Impact Summit 2026.
