The Role of Science in International AI Governance
Abstract
The session opened with a powerful address by UN Secretary‑General António Guterres, who framed science as the essential foundation for trustworthy, evidence‑based AI governance and announced the creation of an independent International Scientific Panel on AI. A fireside chat followed with Yoshua Bengio, who explored how scientific‑policy interfaces operate, the challenges of uncertainty, and the need for neutral, accessible evidence. Brad Smith then delivered a keynote linking the United Nations’ historic role to the current AI era, emphasizing common problem‑definition, the limits of hype, and the importance of shared, science‑driven standards. A rapid‑fire panel featuring experts from the WHO, India, France, and academia examined how evidence can be trusted, what policy signals should steer research, and how AI can both accelerate scientific discovery and pose novel risks. The session closed with Minister Josephine Teo of Singapore, who highlighted the value of small‑state perspectives, the necessity of sustained research investment, and the UN’s unique capacity to foster interoperable, equity‑centred AI governance.
Detailed Summary
1. Opening Address (António Guterres, UN Secretary‑General)
- Premise: “We cannot govern what we do not understand.”
- Vision: The United Nations is establishing an Independent International Scientific Panel on Artificial Intelligence – 40 experts representing diverse geographies and disciplines.
- Purpose of the Panel:
  - Provide a shared baseline of scientific analyses to move policy from philosophical debate to technical coordination.
  - Offer risk‑based guardrails that protect human rights, agency, and safety while enabling innovation.
- Key Points:
  - Science‑led governance is an accelerator, not a brake, for solutions.
  - A universal scientific language can reduce fragmentation, lower costs, and create interoperable standards (e.g., a New Delhi startup can scale globally because benchmarks are shared).
  - Human oversight must be meaningful; accountability cannot be delegated to algorithms.
  - The agenda: first report due before the Global Dialogue on AI Governance (July), followed by ongoing assessments to inform the SDGs.
2. Fireside Chat – “Science‑Policy Interfaces in Practice” (Yoshua Bengio)
| Speaker | Core Themes |
|---|---|
| Yoshua Bengio | • The science‑policy interface must distinguish genuine uncertainty (where evidence is still lacking) from divergence (where scientists disagree). • Analogous to climate‑tipping‑point uncertainty: the lack of historical precedent makes policy decisions hard. • Policymakers need neutral, accessible evidence they can interpret; this requires iterative feedback loops between scientists and policymakers. |
| Moderator (Amandeep Gill) | • Questioned whether the rapid pace of AI research hampers the science‑policy loop. |
| Bengio (continued) | • Benchmarks evolve quickly; there is a lag between research publication, empirical studies, and policy response. • Suggests developing high‑level, principle‑based frameworks that remain applicable despite technological changes, and embedding guardrails directly into AI systems to reduce reaction time. |
| Bengio | • Highlighted his recent appointment to the UN panel and expressed concern for developing‑country impacts, urging a multidisciplinary approach to ensure inclusive participation. |
3. Keynote – “Why the United Nations Remains Central in the AI Era” (Brad Smith)
- Historical Analogy: Economic crises repeat roughly every 80 years; the UN was founded ~80 years ago and has proved essential for global stability (e.g., nuclear non‑proliferation, refugee protection).
- Current AI Context:
- AI development mirrors past technological revolutions: rapid, border‑less, and capable of reshaping societies.
- A common problem definition is lacking; stakeholders rush to debate solutions before agreeing on the underlying issue.
- Critique of Hype:
- Over‑optimistic predictions have poor track records (average grade ≈ 25 %).
- Emphasises the need for grounded, evidence‑based understanding rather than “crystal‑ball” forecasts.
- Human‑Centric Goal: Use AI to make people smarter, not just to build smarter machines.
- Call to Action: Leverage the UN as a coordinator for shared scientific understanding and governance, with Microsoft committing resources to this effort.
4. Rapid‑Fire Panel (≈ 30 minutes)
4.1. Soumya Swaminathan (WHO) – Trustworthy Evidence for Policy
- Lesson from COVID‑19: Trusted, iterative evidence allowed rapid, adaptive public‑health responses; analogous mechanisms are needed for AI.
- Parallel with IPCC: The new UN AI panel could serve as a global assessment body that links national data‑sharing, tools, and policy guidance.
- Inclusivity: Emphasised including the voices of diverse stakeholders (women, low‑income communities, remote farmers) to avoid policy blind‑spots.
- Dynamic Policy: Policies may need to precede evidence but must be flexible to incorporate emerging data.
4.2. Balaraman Ravindran (IIT Madras) – Research Priorities Guided by Policy
- Evidence Gaps: Lack of robust data on AI’s impact on children, rural vs. urban contexts, and agricultural productivity in the Global South.
- Policy‑Driven Benchmarks: Need for national‑level benchmarks (e.g., AI‑co‑pilot for farmers) to evaluate efficacy and unintended consequences.
- Example – Education: Preliminary studies show AI adoption correlates with habitual usage, but causality remains unclear, highlighting need for rigorous impact evaluations.
4.3. Anil Ananthaswamy (Science Journalist) – Communicating Uncertainty
- Public Perception: Over‑stated predictions (e.g., job loss) can mislead; nuanced, evidence‑based communication is essential.
- Policy Implications: If 50 % of jobs disappear, universal basic income becomes the relevant tool; if 80 % of jobs are transformed instead, large‑scale reskilling programs do.
- Role of Economists & Labour Organizations: Continuous monitoring of regional labour impacts is crucial.
4.4. Anne Bouverot (France) – French Perspective
- Scientific Panel Support: France backs the UN panel, nominating Joëlle Barral (AI & health) as a member.
- Policy Scenarios: Depending on scientific forecasts (mass job loss vs. transformation), different policy tools (UBI vs. upskilling) become relevant.
4.5. Ajay Kumar Sood (India) – National AI Governance Framework
- Public‑Private Partnerships: Emphasised compute‑resource accessibility and techno‑legal design for AI governance.
- Risk Assessment: Leveraged experience from digital public infrastructure (financial, identity) to embed safety directly into system design.
4.6. Closing Rapid Reflections
- Amandeep Gill (moderator) asked for 30‑second reflections.
- A. Guterres – AI can accelerate scientific discovery (e.g., recent Nobel Prizes) if open scientific databases are globally available.
- S. Swaminathan – The panel should network sector‑specific scientists, prioritize evidence, flag emerging risks, and embed equity at the core of AI for the public good.
5. Closing Remarks (Josephine Teo, Singapore)
- Small‑State Lens: Singapore’s $1 billion AI R&D plan funds foundational & applied responsible‑AI research, plus a Digital Trust Centre and AI Safety Institute.
- Balancing Speed & Caution: Emphasised the need to integrate science and policy to manage rapid AI development while respecting emerging evidence.
- UN’s Unique Role: Highlighted the UN’s legitimacy and inclusiveness to bridge fragmented AI governance initiatives, endorsing the International Scientific Panel.
- Operationalising Principles: Called for standardised evaluation methodologies, capacity‑building for all nations, and treating scientific input as a foundation, not a constraint.
- Regional Initiatives: Mentioned Singapore’s AI Safety Red‑Team Challenge, the ASEAN AI Governance Guide, and upcoming International Scientific Exchange (May 17‑18).
Key Takeaways
- Science as the backbone of AI governance: the UN’s new International Scientific Panel will provide a shared, evidence‑based baseline for policy worldwide.
- Uncertainty management: Effective governance must distinguish between well‑supported evidence, scientific disagreement, and high‑impact but low‑probability risks.
- Rapid research–policy loop: Benchmarks and standards need to evolve faster than traditional academic cycles; high‑level principle frameworks and built‑in guardrails are essential.
- Inclusivity & equity: Policies must incorporate voices from low‑income, rural, and gender‑diverse stakeholders to avoid blind‑spots and ensure global relevance.
- Economic impact pathways: Different scientific forecasts (job loss vs. transformation) demand distinct policy responses (UBI, reskilling, upskilling).
- International cooperation: The UN uniquely enables interoperable standards and legitimacy, mitigating fragmentation across national AI initiatives.
- Small‑state commitment: Singapore exemplifies how even small nations can invest heavily in AI research, safety infrastructure, and regional collaboration.
- Operational challenges: Translating high‑level AI principles (transparency, accountability, fairness, safety) into standardised evaluation tools remains a priority.
- Future direction: Ongoing collaboration between scientists, policymakers, industry, and multilateral bodies will be required to keep AI governance science‑driven, adaptable, and equitable.
See Also:
- scaling-trusted-ai-how-france-and-india-are-building-industrial-innovation-bridges
- best-practices-from-the-international-network-for-advanced-ai-measurement-evaluation-and-science
- ai-for-democracy-reimagining-governance-in-the-age-of-intelligence
- shaping-secure-ethical-and-accountable-ai-systems-for-a-shared-future
- high-level-panel-discussion-on-ai-bridges-indiaisrael-innovation-dialogue
- scaling-trusted-ai-for-8-billion
- trustworthy-ai-investments-capital-allocations-as-ai-governance
- governing-safe-and-responsible-ai-within-digital-public-infrastructure
- thriving-with-ai-human-potential-skills-and-opportunity
- catalyzing-global-investment-for-equitable-and-responsible-ai-in-health