Catalyzing Global Investment for Equitable and Responsible AI in Health

Abstract

The round‑table gathered senior policy‑makers, development partners and financing institutions to explore how global investment can be aligned to promote equitable, responsible AI for health. Speakers examined current financing gaps, highlighted concrete UK‑led investment pilots (remote surgery, AI‑driven documentation, robotic automation), stressed the need for diverse data, workforce preparedness, and robust governance, and called for coordinated donor strategies that centre country‑led priorities and patient voices.

Detailed Summary

1. Opening Remarks

  • Catharina Boehme (WHO South‑East Asia) opened the session with a brief welcome, noting the rapid maturation of AI in health and the urgency of moving from “possibility” to “implementation.”
  • Alain Labrique (WHO) reinforced the WHO’s commitment to supporting member states in building trustworthy AI ecosystems, stressing that sustainable investment must be coupled with strong governance and equity safeguards.

2. Implementation‑Focused Perspective

  • Prof. Prokar Dasgupta (King’s College London) introduced himself as a practising surgeon and the chair of Responsible AI UK, a UK‑government‑backed initiative that funds AI research and deployment across the UK, India and sub‑Saharan Africa.
  • He argued that implementation, not just invention, is the crucial lever for impact. His central message: “If we do not embed AI into the everyday practice of clinicians and health systems, the technology will never reach the patients who need it most.”

3. Illustrative Investment Pilots

a. AI‑Enhanced Clinical Documentation

  • Responsible AI UK funded an evaluation of MD&T AI, a tool that automatically generates operative notes.
  • Result: Reduced average operating‑theatre documentation time by roughly 30 %, freeing staff for patient‑care activities.

b. Data Diversity & Equity

  • Dasgupta shared a personal anecdote: a watch‑based heart‑event predictor could have saved his 87‑year‑old mother’s life, but the algorithm failed to detect her condition because its training data were not ethnically diverse.
  • Key Insight: “Without diverse datasets, AI can exacerbate, rather than alleviate, health inequities.”

c. Telesurgery 2.0

  • Citing a recent British Medical Journal article (co‑authored by his team), he described a telesurgery system capable of operating over 2,500 km with a latency of ≤ 60 ms, giving the surgeon the sensation of being in the same room as the patient.
  • Potential Impact: Could address the unmet surgical needs of the estimated 5 billion people worldwide who lack access to safe surgical care.

d. Robotic Autonomy & Automation

  • Dasgupta outlined the autonomy spectrum (Levels 0‑5) for medical robots, noting that Level 3 (conditional automation) is currently the most common.
  • He presented a prostate‑targeting robot that maps the organ via ultrasound, then creates a precise channel with a single button press, eliminating the need for repeated manual adjustments.
  • A second project demonstrated a robotic system for bladder removal with 100 % accuracy in pre‑clinical (porcine) trials.
  • When he asked the audience to “raise a hand if you would allow a fully‑automated machine to operate on you,” only one hand was raised, underscoring lingering patient‑trust concerns.

4. Cross‑Cutting Themes

a. Patient & Community Involvement

  • Dasgupta stressed that “if we do not bring patients with us, all this investment will fail.” He called for early, meaningful engagement with patient groups, especially in the Global South, to co‑design AI solutions.

b. Skills Development & Curriculum Integration

  • He warned that almost none of the medical or nursing schools represented in the room currently teach AI.
  • Recommendation: Integrate AI fundamentals, ethics, and data‑literacy into health‑professional curricula to build a future workforce capable of safe AI deployment.

c. Governance, Regulation & Trust

  • Alain Labrique (WHO) and Kenichiro Natsume (WIPO) reiterated that predictability in legal and regulatory frameworks is the “currency of trust” that unlocks private‑sector investment.
  • They highlighted the need for transparent evidence‑generation pipelines, open‑source standards, and coordinated public‑private partnerships.

5. Human‑Centred AI – Ethical Reflections

  • Zameer Brey (Gates Foundation) added a philosophical dimension, urging a shift from the classic Turing Test (machine indistinguishability) to a “wisdom‑bound test” that evaluates how well AI supports societal values, patient dignity, and health equity.
  • Justice Prathiba M. Singh (Delhi High Court) offered a succinct closing statement: “Here’s to a healthier world—where AI and technology serve every citizen, not just the privileged.”

6. Synthesis & Closing Remarks

  • The moderator (likely Payden P., WHO) summarised the discussion, identifying four overarching pillars for advancing equitable AI in health:

    1. Investment Beyond Innovation – Funding must flow into system‑level enablers (governance, data infrastructure, workforce capacity).
    2. Predictability & Trust – Robust, transparent regulatory regimes attract sustainable financing.
    3. Evidence Generation & Transparency – Shared data on safety, efficacy, and cost‑effectiveness builds confidence.
    4. Multi‑Sector Partnerships – Long‑term collaborations among donors, governments, academia, industry and civil society are essential.
  • The moderator concluded that the inflection point has arrived: the debate has moved from “Can AI improve health?” to “Will we invest wisely to ensure AI improves health for everyone?”

7. Audience Interaction & Q&A

  • A brief Q&A segment followed (captured in the transcript as a series of short audience‑member interjections). Key points raised by the audience included:

    • Data Sovereignty: How can low‑income countries retain ownership of health data while still benefiting from global AI models?
    • Financing Mechanisms: What innovative financing tools (e.g., blended finance, results‑based funding) could be mobilised for AI health pilots?
    • Regulatory Harmonisation: How can regional regulatory bodies align standards to avoid duplication and fragmentation?
  • Speakers responded by emphasizing local data‑governance frameworks, public‑private blended funds, and the role of WIPO in standard‑setting.

8. Transition to Next Session

  • The moderator thanked participants, announced a brief intermission, and invited attendees to the upcoming panel “AI for Wix and Badaw – The Capacity‑Building Imperative.”

Key Takeaways

  • Implementation‑First Mindset: Successful AI in health requires concrete pilots that embed technology into routine clinical workflows (e.g., MD&T AI documentation).
  • Data Diversity Is Non‑Negotiable: Without representative datasets, AI risks widening existing health inequities.
  • Remote Surgery Feasibility: Telesurgery 2.0 demonstrates that high‑precision, low‑latency remote operations are technically achievable and could dramatically increase surgical access.
  • Patient Trust Remains a Barrier: Real‑world acceptance of fully autonomous surgical robots is low; transparent communication and inclusive design are essential.
  • Workforce Upskilling Is Critical: Health‑professional education must integrate AI literacy to sustain long‑term, responsible deployment.
  • Governance as Investment Enabler: Predictable, transparent regulatory environments act as the “trust currency” that attracts and locks in financing.
  • Human‑Centred Metrics: Move beyond the Turing Test toward assessments that measure AI’s contribution to societal wisdom, equity, and patient dignity.
  • Coordinated Donor Strategies: Funding should align with country‑led priorities, leveraging joint principles for digital health to avoid fragmented efforts.
  • Multi‑Sector Partnerships: Sustainable AI for health rests on enduring collaborations among governments, funders, industry, academia, and civil society.
  • Call to Action: Stakeholders must collectively invest in system‑level foundations—data infrastructure, governance, workforce, and evidence generation—to ensure AI becomes a tool for health equity rather than a source of new disparity.
