Towards Global Cooperation for Equitable AI in Healthcare

Abstract

The panel explored how artificial intelligence can be deployed in health systems without reproducing or worsening existing inequities. Starting from a set of eight “tenets” for inclusive AI co‑designed by researchers in Australia and India, the discussion moved through practical examples (e.g., gender‑biased TB‑screening models, smartphone‑based corneal topography, sign‑language gaming apps), legal‑compliance pathways (Microsoft’s CELA model), the emerging UK “AI‑assurance” ecosystem, and WHO‑led efforts to align global regulatory frameworks. Audience questions probed data‑sharing mechanisms, federated‑learning approaches, and ways to convince frontline health workers that AI will augment—not replace—their work.

Detailed Summary

1. Opening and Context

  • Moderator (Aman Taneja) opened the session by handing the floor to Shambhavi (representing Ikigai Law), who outlined the motivation for the panel.
  • Ikigai Law’s three years of research on responsible AI highlighted a gap: abundant policy principles and risk literature, but few actionable playbooks for inclusive AI in health care.
  • To fill this gap, they secured a joint grant with the University of Melbourne and NALSAR University of Law to study co‑design/participatory design.
  • The outcome: eight “tenets” that can be consulted at any stage of the AI lifecycle (design, development, deployment, monitoring). The tenets are not a checklist; they are flexible prompts such as “Who are you building this AI for?” and “What are the intermediaries in the delivery chain?”

2. Introduction of Panelists

Aman introduced each participant, noting their institutional perspective (clinical, regulatory, industry, global health).

3. From Theory to Practice – Inclusive Design in India

3.1. Dr Krithika Rangarajan (AIIMS)

  • Emphasised patient‑first thinking: start by asking “what problem are clinicians and patients actually facing today?” rather than “what technology can we build?”
  • Cited examples of ASHA and Anganwadi workers refusing tools because they threatened professional respect or generated excessive alerts.
  • Stressed the need to adapt to local contexts (rural vs. tertiary‑care settings, internet latency, power reliability).
  • Described a sign‑language gaming app for the Deaf‑hard‑of‑hearing community, co‑designed with users to gather both engagement data and training data for future sign‑language models.
  • Highlighted the “tenets” in action – iteratively revisiting who the end‑users are, checking cultural relevance, and tailoring risk assessments.

3.2. Practical Trade‑off Example (by Aman)

  • Recalled a collaboration with Wadhwani AI on a TB‑cough‑screening model that initially performed better for men than women. The team re‑balanced the model, accepting a modest overall accuracy loss to achieve gender fairness.
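The trade‑off described above can be made concrete with a small sketch. This is purely illustrative (toy data and thresholds, not the Wadhwani AI model): per‑group decision thresholds are chosen to equalise sensitivity (recall) across men and women, at the cost of a modest drop in overall accuracy compared with the accuracy‑optimal thresholds.

```python
# Illustrative fairness/accuracy trade-off sketch (not the actual model).
from dataclasses import dataclass

@dataclass
class Group:
    scores: list[float]  # model probabilities for "TB-positive"
    labels: list[int]    # 1 = TB, 0 = healthy

def recall(g: Group, thr: float) -> float:
    """Fraction of true TB cases flagged at a given threshold."""
    tp = sum(1 for s, y in zip(g.scores, g.labels) if y == 1 and s >= thr)
    pos = sum(g.labels)
    return tp / pos if pos else 0.0

def accuracy(groups: list[Group], thrs: list[float]) -> float:
    """Overall accuracy when each group uses its own threshold."""
    correct = total = 0
    for g, t in zip(groups, thrs):
        for s, y in zip(g.scores, g.labels):
            correct += int((s >= t) == bool(y))
            total += 1
    return correct / total

# Toy data: the model separates men's cases cleanly, women's poorly.
men   = Group([0.9, 0.8, 0.2, 0.1, 0.05], [1, 1, 0, 0, 0])
women = Group([0.6, 0.35, 0.5, 0.4, 0.1], [1, 1, 0, 0, 0])

# Accuracy-optimal thresholds: women's recall is only half of men's.
base = [0.5, 0.55]
# Lowering the women's threshold equalises recall at a small accuracy cost.
fair = [0.5, 0.3]

print(recall(men, base[0]), recall(women, base[1]))  # 1.0 vs 0.5
print(recall(men, fair[0]), recall(women, fair[1]))  # 1.0 vs 1.0
print(accuracy([men, women], base))                  # 0.9
print(accuracy([men, women], fair))                  # 0.8
```

The numbers are contrived, but the shape of the decision is the one the panel described: a small, deliberate loss in aggregate accuracy bought equal sensitivity across genders.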

3.3. Key Insight

  • The most effective AI solutions often arise from non‑technical interventions (workflow redesign, community engagement) rather than defaulting to a new algorithm.
4. Legal Compliance – Microsoft (CELA)

  • Mr Mohit Jain explained Microsoft’s CELA (Corporate, External, and Legal Affairs) review process: any AI system involving human interaction must satisfy the stricter of the two jurisdictions (U.S. and India).
  • With India’s new DPDP (Digital Personal Data Protection) Act, CELA now cross‑references GDPR, the DPDP Act, and other market‑specific regimes, applying the strictest requirement when there is conflict.
  • This “dual‑jurisdiction” model ensures that AI products deployed in India meet global‑standard privacy and safety benchmarks.

5. AI Assurance – UK (techUK)

  • Ms Tess Buckley introduced the UK concept of AI Assurance – the operationalisation of responsible‑AI principles.
  • Illustrated with Model Cards: a “nutrition‑label” style document describing data provenance, performance metrics, and intended use.
  • Summarised the Trusted Third‑Party Roadmap (four pillars):
    1. Risk‑based assurance techniques (stress‑testing, auditing).
    2. Information & data access (transparent provenance).
    3. Skills & competency framework for AI auditors and responsible‑AI leads.
    4. Funding for innovative assurance tools.
  • Noted strong uptake in public‑sector domains with high fiduciary duties (healthcare, finance, justice).
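The Model Card idea mentioned above can be sketched as a small machine‑readable document. Everything below is hypothetical — the field names loosely follow common model‑card practice and the values are invented for illustration, not drawn from any real system:

```python
# Hypothetical model card for a TB cough-screening model.
# Field names and all values are illustrative, not a formal schema.
import json

model_card = {
    "model": "tb-cough-screen-v2",
    "intended_use": "Triage aid for community TB screening; not a diagnostic.",
    "data_provenance": {
        "source": "consented cough recordings from partner clinics",
        "collection_period": "2022-2023",
    },
    "performance": {
        "overall_auc": 0.87,
        "sensitivity_by_sex": {"male": 0.84, "female": 0.83},
    },
    "limitations": [
        "Not validated for paediatric patients",
        "Recording quality varies with device microphone",
    ],
    "out_of_scope": ["Self-diagnosis without clinician oversight"],
}

# The "nutrition label" a deployer or auditor would read:
print(json.dumps(model_card, indent=2))
```

The value of the format is less the specific fields than that provenance, disaggregated performance, and intended/out‑of‑scope use sit in one auditable document.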

6. Global Policy Alignment – WHO

  • Ms Kanika Kalra described a landscape analysis covering 95% of the world’s population, mapping AI‑related regulations (U.S., EU AI Act, state‑ vs. federal‑level laws).
  • Emphasised the need for “alignment” rather than “harmonisation”: shared definitions for risk‑classification, technology registries, post‑market surveillance, and sustainability criteria.
  • Warned that fragmented regulations impede cross‑border AI deployment, especially in health where solutions are inherently global.

7. Cross‑Border Technical Collaboration

  • Dr Krithika responded to the alignment theme by describing federated‑learning pilots: AI models travel to local data centres, learn from site‑specific data without raw data leaving the country.
  • She also highlighted language and cultural variation in symptom description (e.g., “right upper quadrant pain” vs. lay expressions) and the necessity to embed localized vocabularies into models.
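The federated‑learning pattern Dr Krithika described can be sketched in a few lines. This is a minimal illustration of the idea (a shared linear model improved by federated averaging), not any specific framework or pilot: each site trains a local copy on its own data, and only the learned parameters — never the raw records — leave the site.

```python
# Minimal federated-averaging sketch: the model travels, the data does not.
import random

def local_update(weights, data, lr=0.1, steps=20):
    """One site's pass: per-sample least-squares gradient steps on (x, y)."""
    w = list(weights)
    for _ in range(steps):
        for x, y in data:
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def federated_round(global_w, site_datasets):
    """Send the model out, train locally, average the returned weights."""
    local_ws = [local_update(global_w, data) for data in site_datasets]
    return [sum(ws) / len(ws) for ws in zip(*local_ws)]

random.seed(0)
true_w = [2.0, -1.0]  # the relationship hidden in every site's data

def make_site(n):
    data = []
    for _ in range(n):
        x = [random.uniform(-1, 1), random.uniform(-1, 1)]
        data.append((x, sum(w * xi for w, xi in zip(true_w, x))))
    return data

sites = [make_site(30) for _ in range(3)]  # three hospitals' private data

w = [0.0, 0.0]
for _ in range(5):
    w = federated_round(w, sites)
print(w)  # approaches true_w; no site ever shared a raw record
```

Real deployments add secure aggregation, differential privacy, and handling of non‑identically‑distributed site data, but the data‑sovereignty property — raw data stays in‑country — is visible even in this toy version.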

8. Research Impact & Open‑Source Innovation (Microsoft)

  • Mohit Jain recounted the development of a smartphone‑based corneal topographer in partnership with Sankara Eye Hospital (Bangalore).
  • Although the initial paper drew few citations, the open‑source hardware/software attracted interest from clinicians and researchers in Afghanistan, Pakistan, and Morocco, and at Stanford, UCSF, Johns Hopkins, and elsewhere.
  • This led to a commercial medical device (Insta Casey) that achieved regulatory clearance in India and the U.S., illustrating how open‑source research can catalyse global adoption.

9. Audience Q&A (selected themes)

  • Foundation lead (hearing‑impairment) — topic: clarification on a “hearing app”. Krithika clarified the project was a sign‑language game for data collection, not a diagnostic hearing test.
  • Founder (health‑tech) — topic: data sharing for federated models; lack of hospital collaboration. Krithika listed three Indian data‑resource initiatives — IBIA (Imaging Biobank), ICMR‑MIDAS, and AI‑KOSH — and announced upcoming Kaggle‑style contests and notes on the CMIE bio‑incubator to help startups access data responsibly.
  • Audience (ASHA‑worker AI tool) — topic: resistance from frontline health workers. Krithika advised making AI tools “stupid to ignore”, i.e., so easy to use and demonstrably helpful that declining them makes no sense, mirroring the adoption success of Digi‑Yatra airport check‑in.
  • General — topic: practical steps for WHO‑led alignment. Kanika outlined shared taxonomies, risk‑based classification, and post‑market surveillance as minimal baseline standards that WHO is pushing for global uptake.

10. Closing Remarks

  • Aman thanked participants and noted that a short video message from Dr Mahima Kalla (University of Melbourne) would play after the session, summarising the Validitron simulation lab, which creates realistic patient‑clinician environments for testing AI before real‑world rollout.
  • The moderator reminded the audience that the discussion was recorded for later viewing and that further questions could be followed up via the conference’s online platform.

Key Takeaways

  • Tenets over check‑lists: Inclusive AI requires flexible, iterative prompts that can be revisited at any lifecycle stage (design, deployment, monitoring).
  • Co‑design is essential: Direct involvement of patients, frontline workers, and community advocates prevents unintended harms and improves adoption.
  • Legal dual‑jurisdiction compliance: Microsoft’s CELA model applies the stricter of relevant data‑protection regimes, ensuring global‑standard safety for AI deployed in India.
  • AI Assurance tools (e.g., Model Cards) translate high‑level ethical principles into concrete, auditable documentation.
  • Policy alignment, not harmonisation: Consistent taxonomies, risk classifications, and post‑market surveillance across countries are needed to avoid fragmentation of health‑AI ecosystems.
  • Federated learning & cultural localisation enable AI models to learn from diverse datasets without breaching data‑sovereignty or ignoring linguistic nuances.
  • Open‑source research can have outsized impact: Low‑citation papers may still seed globally adopted medical devices when code and hardware are openly shared.
  • Ease of use drives adoption: Tools must be demonstrably simpler than existing workflows (e.g., Digi‑Yatra analogy) to win over overburdened health workers.
  • India’s emerging data infrastructure (IBIA, MIDAS, AI‑KOSH) provides a foundation for responsible data sharing and benchmarking.
  • Cross‑border collaboration (UK‑India partnerships, WHO‑led alignment) is vital for scaling equitable AI solutions worldwide.

Prepared from the verbatim transcript of the “Towards Global Cooperation for Equitable AI in Healthcare” panel at the Delhi AI Impact Summit.
