Responsible AI for Health: Governance, Implementation and Investment Considerations
Detailed Summary
1. Opening Remarks (Samir Pujari, WHO)
- Samir Pujari opened the session, thanked attendees, and framed the goal as moving AI in health from “talk” to scale.
- He acknowledged the delayed start, introduced the panelists, and encouraged the audience to treat the discussion as a networking opportunity for future collaborations.
2. Intellectual Property as an Enabler (Ulrike Till, WIPO)
- Patents: Provide economic incentives, legal certainty for investors, and a disclosure mechanism that ultimately enriches the public domain.
- Copyright: Protects training datasets, code, and model outputs; without clear copyright, firms may keep data as trade secrets, hindering rollout.
- Balanced Approach: IP must be “calibrated with access” – strong enough to motivate innovation, yet structured (e.g., licensing, open‑access clauses) to ensure equitable deployment.
- Emphasized that IP is one cog in the ecosystem and must work alongside other gears such as governance, ethics, and data sharing.
3. Ethics, Governance, and “Ethics‑by‑Design” (Andreas Reis, WHO)
- WHO’s ethical principles (first released 2019, updated 2021, and LLM‑specific guidance 2024) have been widely downloaded.
- The main gap: translating high‑level principles into national legislation and operational practice.
- Introduced the concept of ethics‑by‑design – embedding ethical considerations directly into model development, not as a post‑hoc add‑on.
- WHO has launched an online training course for AI designers and programmers to teach practical implementation of these principles.
4. Industry Perspective: Constitutional AI & Benchmarks (Soumya, Anthropic)
- Constitutional AI: A framework where a “constitution” of ethical rules guides model training before deployment, ensuring alignment with societal values.
- Highlighted low‑hanging‑fruit use cases: clerical and administrative tasks that free clinicians to focus on patient care.
- Stressed the need for robust benchmarking and evaluation to guide iterative model improvement.
5. Evaluation & Evidence Generation (Dr Mona Dugul, ICMR)
- Called for cross‑learning across countries; emphasized that India already has state‑level pilots (e.g., in the south) that can inform national policy.
- Pragmatic trials: Traditional randomized controlled trials (RCTs) are too slow; proposed AI test‑beds, digital twins, and rapid‑feedback loops instead.
- Proposed developing centres of excellence to act as evaluation hubs, feeding results back to developers and health systems.
6. Funding, Scale‑Up, and Governance (Emily Muller, Wellcome Trust)
- Evaluation agenda must be multi‑dimensional: clinical effectiveness, cost‑effectiveness, health‑system integration, equity impact, model drift, and behavioural change.
- As a funder, Wellcome will aggregate expertise, support network building, and promote reusable evaluation methods.
- Emphasized that developers alone often lack resources to conduct full‑scale evaluations, creating a role for funders to bridge the gap.
7. Investment & Operational Realism (Shandendra Krishnan, Gates Foundation)
- “Pilotitis” is a major barrier: many pilots never move beyond proof‑of‑concept into routine use at scale.
- Three criteria for viable pilots: (1) Operational reality (works in primary health centres, with frontline workers), (2) Alignment with national digital infrastructure (e.g., India’s ABDM), (3) Governance & procurement pathways (budget lines that explicitly fund AI).
- Human factors: training, behavioural readiness of clinicians, and patient trust are essential for scale.
8. Data Infrastructure & Regulatory Landscape (Devlin, Google Health)
- Data gaps: Indian‑centric datasets are scarce and often not ready for model training.
- Public awareness: Very few citizens understand data‑interoperability or consent; increasing awareness is crucial for data sharing.
- Regulatory ecosystem: Varied across regions (EU, UK, US, India). Google tracks regulations closely, collaborates with policymakers, and advocates for holistic, end‑to‑end regulation that includes public health, private sector, and consumer perspectives.
9. Cross‑Border Knowledge Flow & Community Building (Wellcome & Gates)
- Wellcome: Funds high‑impact research cohorts, supports communities of practice, and facilitates global mentorship networks.
- Gates: Works with national advisory groups (e.g., in Andhra Pradesh), leverages expert panels to bring global AI expertise to local implementation.
- Both emphasized capacity building within ministries, academic institutions, and civil‑society organisations to sustain the ecosystem.
10. Panel Synthesis & Audience Call‑to‑Action (Samir Pujari)
- Reiterated that governance is the central lever for scaling AI in health.
- Urged participants to forge concrete partnerships (government‑industry‑academia) and to share data assets responsibly.
- Provided a WHO contact email (ai@who.int) for follow‑up.
11. Closing Remarks & Logistics
- Moderator thanked panelists and audience, reminded attendees of the next session, and asked participants to continue discussions off‑stage.
Key Takeaways
- Governance matters most – without clear policies, procurement pathways, and budget lines, AI pilots remain isolated experiments.
- IP is an enabler, not a barrier – patents and copyright can incentivise innovation while ensuring knowledge dissemination when paired with equitable licensing.
- Ethics‑by‑design must be embedded early – WHO’s training and Anthropic’s constitutional AI illustrate practical ways to operationalise ethics.
- Robust, pragmatic evaluation is essential – AI test‑beds, digital twins, and rapid‑feedback loops are preferred over lengthy RCTs.
- Data sovereignty and accessibility – Indian‑centric, high‑quality datasets are needed; public awareness of consent and interoperability must improve.
- Cross‑sector partnerships accelerate scale – Funders (Wellcome, Gates) are actively linking governments, academia, and industry through advisory groups and community‑of‑practice models.
- Human factors cannot be ignored – clinician training, behavioural change, and patient trust determine whether AI tools are adopted in routine care.
- Low‑hanging‑fruit use cases (administrative, clerical tasks) provide quick wins and free clinicians to focus on complex care.
- Funding should cover full‑spectrum evaluation – technical validation alone is insufficient; cost‑effectiveness, equity, and system integration must be assessed.
- Future vision – Panelists envision AI contributing to disease cures and health‑system strengthening by 2030‑35, but only if the ecosystem of governance, data, ethics, and partnerships is solidly built today.
See Also:
- data-sharing-infrastructures-for-ai-building-for-trust-purpose-and-public-values
- enterprise-adoption-of-responsible-ai-challenges-frameworks-and-solutions
- publicly-accessible-data-and-ai-training-safeguards-for-responsible-reuse
- catalyzing-global-investment-for-equitable-and-responsible-ai-in-health
- ai-innovators-exchange-accelerating-innovation-through-startup-and-industry-synergy
- ai-impact-forum-democratising-ai-resources
- navigating-the-ai-regulatory-landscape-a-cross-compliance-framework-for-safety-and-governance
- shaping-secure-ethical-and-accountable-ai-systems-for-a-shared-future
- multistakeholder-partnerships-for-thriving-ai-ecosystems
- governing-safe-and-responsible-ai-within-digital-public-infrastructure