International AI Safety Coordination: What Policymakers Need to Know

Detailed Summary

1. Opening Remarks – Cyrus Hodes (AI Safety Connect)

  • Cyrus Hodes opened with a high‑level description of the rapid escalation of AI investment (“billions‑trillions of dollars”) and the resulting mismatch between capability development and safety governance.
  • He outlined AI Safety Connect’s mission: to foster global majority engagement, showcase concrete governance tools, and convene annual summits (Paris 2023, New Delhi 2024, Switzerland 2025, plus UN General Assembly briefings).
  • Hodes thanked the co‑hosts—the International Association for Safe and Ethical AI (IASEAI, led by Prof. Stuart Russell) and the Digital Empowerment Foundation (Osama Manzar)—as well as Sympatico Ventures and the Future of Life Institute for sponsorship.
  • He announced the upcoming second‑annual conference in Paris (starting Tuesday at UNESCO) and highlighted that ≈1,300 participants have registered, underscoring growing international interest.

Key Insight: The safety agenda is moving from academic circles to a multi‑stakeholder, geopolitical effort that must include both “global‑north” and “global‑south” actors.


2. Moderator’s Framing (Eileen Donahoe)

  • Dr. Eileen Donahoe framed the discussion as a policymaker‑focused exploration of coordination gaps in frontier AI safety.
  • She highlighted three intertwined challenges:
    1. Speed of AI development outpacing policy cycles.
    2. Fragmented governance across jurisdictions, leading to mixed incentives for developers, investors, and regulators.
    3. Middle‑power agency – the potential for non‑superpower states to leverage pooled resources, normative influence, and regulatory innovation.
  • Donahoe set the agenda: each panelist would discuss (a) lessons learned from existing coordination mechanisms and (b) the single most critical piece of infrastructure needed right now (e.g., an international incident‑response centre).

3. OECD Perspective – Mathias Cormann

3.1 Lessons from Building Consensus

  • Inclusive multi‑stakeholder processes are essential; trust is built on objective evidence and the participation of governments, industry, civil‑society, and technical experts.
  • Policy‑innovation gap: AI’s development pace exceeds traditional legislative cycles, creating “innovation‑oversight gaps.”
  • International consistency reduces compliance costs and limits “policy shopping” by firms.

3.2 Concrete OECD Contributions

  • OECD AI Principles (2019) – The first globally endorsed baseline for trustworthy AI, now adopted by ~50 countries. Impact: provides a common language for policy and corporate compliance.
  • OECD AI Definition & Lifecycle Model – A technical definition adopted in the EU AI Act, U.S. Executive Orders, and other national frameworks. Impact: aligns regulatory scopes across jurisdictions.
  • Global Partnership on AI (GPAI), co‑chaired by Korea & Singapore – A platform for sharing best‑practice policy and technical knowledge. Impact: facilitates peer learning and reduces duplication.
  • OECD‑AI Policy Observatory – A centralised data repository of AI‑related policies worldwide. Impact: supplies evidence‑based inputs for policy design.
  • AI Incident Reporting Framework (Hiroshima AI Code of Conduct) – Piloted at the 2023 AI Action Summit in Paris; 25 organisations from nine countries submitted detailed risk‑management reports. Impact: the first comparable, cross‑jurisdictional view of developer practices.

3.3 Forward‑Looking Infrastructure Need

  • Coordinated Transparency & Incident Reporting – expand the current reporting framework into a global AI Incident Response Centre that can aggregate alerts, share “near‑miss” data, and protect reporting entities from commercial or legal repercussions (good‑faith reporting).
  • Open‑Source Safety‑Tool Catalog – OECD‑hosted repository of vetted safety‑evaluation tools to lower entry barriers for developers worldwide.

Recommendation: Prioritise global, real‑time incident‑sharing mechanisms and standardised safety‑tool access.


4. Singapore’s View – Minister Josephine Teo

4.1 Singapore’s Geostrategic Position

  • As a small, highly connected knowledge economy, Singapore sits at the nexus of U.S. and Chinese AI ecosystems.
  • The nation cannot dictate the origin of AI systems that its citizens and firms will use, but it can shape the policy translation layer (science → regulation).

4.2 Policy Translation & Effectiveness

  • Policymakers must ask two core questions:
    1. Effectiveness – Does a policy deliver the intended safety outcomes, or is it merely symbolic?
    2. Trade‑offs – What societal costs are incurred by a safety measure, and can they be mitigated?
  • Teo illustrated this with the Aviation Safety Analogy: runway spacing standards are grounded in empirical testing (weather, aircraft type) and must be harmonised internationally to avoid operational friction for airlines.

4.3 Call for International Standards & Interoperability

  • Emphasised the necessity of interoperable safety standards across borders, leveraging multilateral initiatives such as the Global Partnership on AI, AI Safety Connect, and IASEAI.
  • Warned against fragmentation: without coordinated dialogue, domestic safety frameworks remain siloed, eroding public trust and slowing AI adoption.

Key Insight: Small states can amplify influence by acting as test‑beds for standards and by participating actively in multilateral norm‑setting.


5. Malaysia’s Experience – Govind Singh Deo

5.1 National & Regional Initiatives

  • Malaysia, as ASEAN 2025 chair, placed AI at the centre of the regional agenda via the ASEAN AI Safety Network.
  • The country is finalising an AI National Action Plan and expects an AI Governance Bill to be tabled in Parliament in 2026.

5.2 Building Domestic Capacity – Lessons for Others

  • Invest in expertise – Policy‑making must be underpinned by technical specialists who understand fast‑evolving AI risks.
  • Create dedicated institutions – Bodies such as a National AI Office can map sector‑specific risks and coordinate with industry.
  • Design interoperable standards – Align local standards with regional (ASEAN) and global (OECD) frameworks to enable cross‑border cooperation.
  • Institutionalise enforcement – Regulations are ineffective without an agency capable of monitoring and penalising non‑compliance.
  • Iterative learning – Share incident data among ASEAN members so that each country can benefit from the others’ experiences.

5.3 Concrete ASEAN Action Items (Next 12‑18 months)

  1. Formalise the ASEAN AI Safety Network (secretariat, regular meetings, data‑sharing protocols).
  2. Pilot joint AI‑risk simulations across member states to test interoperability of safety standards.
  3. Adopt a regional incident‑reporting template (drawing on the OECD‑AI framework) to ensure timely alerts.

Recommendation: Sustained political will and a clear enforcement mechanism are essential; otherwise the network risks remaining a “talk‑shop.”


6. World Bank’s Role – Sangbu Kim

6.1 Challenges for the Global South

  • Frontier AI models arrive under‑tested, often lacking certification for local languages or risk environments.
  • Developing economies may lack technical expertise and financial resources to evaluate these systems before large‑scale deployment.

6.2 World Bank Strategy

  • Early‑stage safety‑by‑design: assist client governments in embedding safety architectures during the AI system design phase rather than retrofitting later.
  • Knowledge‑transfer partnerships: connect low‑capacity countries with high‑capacity partners (advanced economies, leading tech firms) to share red‑team exercise results and best‑practice incident‑response playbooks.
  • Red‑Team Demonstrations – The Bank is facilitating collaborations where a technology firm conducts adversarial attacks on a prototype AI system, enabling the client to learn defensive measures before rollout.

6.3 Analogy – “Spear vs. Shield”

  • Kim used a metaphor: AI attacks are like a “spear” that can penetrate weak defences; a robust “shield” (AI‑driven security) can block it. The paradox is that both the spear and the shield are built using AI, so the race is to ensure the shield stays ahead.

Key Insight: Co‑development of defensive AI tools is the most viable route for the Global South to stay ahead of emerging threats.


7. Future of Life Institute Perspective – Jaan Tallinn

7.1 Super‑Intelligence Prohibition Statement (Oct 2023)

  • The Institute released a call for a moratorium on the development of super‑intelligent AI until two conditions are met:
    1. Broad scientific consensus that safe, controllable super‑intelligence is achievable.
    2. Strong public buy‑in (mass‑democratic legitimacy).
  • Over 130,000 signatures were collected, indicating growing public concern.

7.2 Practicality of a Prohibition

  • Tallinn argued that legal bans are hard to enforce given the massive global investment (trillions of dollars) and the competitive advantage AI promises.
  • However, public pressure can shape policy if the world’s economic weight is harnessed through coordinated advocacy.

7.3 Role of Investors

  • He noted that private investors historically shaped AI incentives, but the current “cut‑throat race” among the world’s biggest labs has outpaced investor influence.
  • As AI firms move toward IPOs and public markets, investor leverage diminishes; regulation will need to come from state actors and multilateral bodies.

Recommendation: Emphasise transparent governance and public accountability rather than relying on investor‑led self‑regulation.


8. Rapid‑Action Priorities – Panel Consensus

  • Mathias Cormann (OECD) – Expand global incident‑reporting and transparency; develop a prototype AI Incident Response Centre.
  • Josephine Teo (Singapore) – Refresh the Singapore AI Safety Research Consensus (to keep pace with fast‑moving research) and deploy practical testing tools for model verification.
  • Govind Singh Deo (Malaysia) – Institutionalise the ASEAN AI Safety Network; create regional enforcement mechanisms and harmonise standards.
  • Sangbu Kim (World Bank) – Embed safety‑by‑design in early AI project lifecycles for client governments; scale red‑team/penetration‑testing collaborations.
  • Jaan Tallinn (Future of Life Institute) – Promote transparency (public disclosure of lab roadmaps) and institutionalise a sustained AI‑safety governance body.
  • Stuart Russell (IASEAI) – Institutionalise the AI‑safety dialogue (permanent forum, funding streams) to ensure continuity beyond ad‑hoc summits.
  • Cyrus Hodes / Nicolas Miailhe (AI Safety Connect) – Accelerate development of an open‑source safety‑tool catalog and secure multi‑stakeholder funding for rapid‑deployment pilots.

Overall Message: There is no single silver bullet; a bundle of coordinated actions—incident‑reporting, standards, capacity building, transparency, and institutional continuity—is required to bridge the coordination gap before frontier AI outpaces governance.


9. Closing Remarks & Announcements

  • Prof. Stuart Russell reiterated the urgency, noting that the coordination gap is both real and surmountable. He emphasized the need for continuous, democratic participation in shaping AI safety.
  • Osama Manzar (Digital Empowerment Foundation) invited participants to the next UN General Assembly session in New York, where the fourth AI Safety Connect event will be hosted.
  • The panel thanked all sponsors (Sympatico Ventures, Future of Life Institute, Minderoo Foundation) and co‑hosts for making the session possible.

Key Takeaways

  • Speed vs. Policy Gap – AI development is outpacing traditional regulatory cycles, creating dangerous “innovation‑oversight gaps.”
  • Inclusive Multi‑Stakeholder Trust – Trust is built through objective evidence and the participation of governments, industry, civil society, and technical experts.
  • International Consistency Reduces Fragmentation – The OECD AI Principles (adopted by ~50 nations) and the OECD‑AI lifecycle definition have become de facto global baselines.
  • Coordinated Incident Reporting Is Critical – A global AI Incident Response Centre, building on the Hiroshima AI Code of Conduct, is the most urgent infrastructure need.
  • Middle‑Power Agency – Countries like Singapore and Malaysia can leverage regional platforms (ASEAN AI Safety Network, GPAI) to influence global norms despite limited domestic AI capacity.
  • Capacity Building for the Global South – The World Bank stresses early‑stage safety‑by‑design and red‑team collaborations to help low‑capacity nations avoid unsafe deployments.
  • Super‑Intelligence Moratorium – While a legal ban may be impractical, mass public advocacy (130,000+ signatures) can pressure governments to adopt precautionary policies.
  • Investors’ Diminishing Influence – As AI firms move toward public markets and state‑driven competition, government‑level regulation will dominate the safety agenda.
  • Immediate Action Items (next 12‑24 months):
    1. Deploy global incident‑reporting mechanisms and a prototype response centre.
    2. Refresh and operationalise the Singapore AI Safety Consensus and develop concrete testing tools.
    3. Institutionalise the ASEAN AI Safety Network with enforceable standards and data‑sharing protocols.
    4. Scale open‑source safety‑tool catalogs via OECD and AI Safety Connect.
    5. Embed safety‑by‑design in early AI project lifecycles through World Bank‑facilitated partnerships.
  • Sustained Multilateral Dialogue – The panel underscored that continuous, well‑funded forums (AI Safety Connect, IASEAI, UN‑GA assemblies) are essential to keep the governance conversation alive and effective.
