Applying AI: How AI Innovators and Governments Reshape the Public Sector

Abstract

The panel convened under the auspices of AI Safety Connect to examine how governments and AI innovators can jointly translate trustworthy‑AI principles into concrete public‑sector services and policies. After a brief opening by Professor Stuart Russell and moderation by Eileen Donahoe, senior officials from the OECD, Singapore, Malaysia, the World Bank, and the Future of Life Institute debated the current gaps in international AI safety governance and explored practical steps that policymakers can take in the coming months. Emphasis was placed on the crucial role of “middle‑power” and “global‑majority” states in shaping norms, building coordinated infrastructures (e.g., an international incident‑response hub), and ensuring that AI development proceeds with robust risk‑management frameworks that are inclusive, evidence‑based, and technically sound.

Detailed Summary

1.1 AI Safety Connect’s Mission (Cyrus Hodes & Nicolas Miailhe)

  • AI Safety Connect is a global convening body that meets twice a year, at major AI summits (Paris 2023, New Delhi 2024, upcoming Switzerland 2025) and at the UN General Assembly.
  • Its goals: accelerate frontier AI safety research, foster global coordination, and deliver capacity‑building and trust‑building exercises for governments, industry, and academia.

1.2 Recent Activities in New Delhi

  • A full day of panels, solution demos, and closed‑door workshops.
  • Notable event: a special address by Prime Minister Dick Schoof of the Netherlands on leadership’s role in AI safety.
  • Partnership acknowledgments: International Association for Safe & Ethical AI (Prof. Stuart Russell), Digital Empowerment Foundation (Osama Manzar), and sponsors Sympathico Ventures, Future of Life Institute, Minderoo Foundation.

1.3 Transition to the Panel

  • The moderator introduced the upcoming speakers (Mathias Cormann, Josephine Teo, Jaan Tallinn, Gobind Singh Deo, Sangbu Kim) and handed the floor to Prof. Stuart Russell for brief remarks on the upcoming Paris conference and the broader safety agenda.

2. Moderator’s Framing (Eileen Donahoe)

  • Key Thesis: While technical solutions to AI safety are essential, governance is the larger, more urgent challenge.
  • Core Question: How can international coordination ensure that only safe AI systems are built and deployed?
  • Emphasized that AI harms—psychological, societal, or existential—cross borders, making global cooperation indispensable.
  • Highlighted India’s role in championing universal participation in AI governance, aligning with the “global‑majority” perspective.

3. Panelist Contributions

3.1 Mathias Cormann (OECD) – Lessons from Consensus‑Building

  1. Historical Achievements

    • Development of OECD AI Principles (transparency, robustness, accountability, etc.).
    • Creation of a definition of AI systems that gained broad international acceptance.
    • Operationalization of the Hiroshima International Code of Conduct for AI.
  2. Key Lessons

    • Inclusion is vital: Trust emerges when all relevant actors—governments, private firms, civil society, and technical experts—participate.
    • Evidence‑based processes: Objective data and transparent methodology underpin durable agreements.
  3. Future Infrastructure Needs

    • International Incident‑Response Center (IIRC): Proposed as a “first‑line” hub for rapid coordination when a harmful AI event occurs.
    • Assessment of feasibility: The OECD sees the IIRC as high‑priority but acknowledges significant political and logistical challenges (jurisdiction, data‑sharing, legal authority).
  4. Open Question

    • How to reconcile national sovereignty with the need for a centralized rapid‑response capability?

3.2 Josephine Teo (Singapore) – Policy Levers & Digital Governance

  1. Singapore’s Approach

    • Adoption of a “Regulatory Sandbox” model for AI pilots, allowing controlled experimentation while monitoring risk.
    • Emphasis on data governance (privacy, security) as a pre‑condition for trustworthy AI.
  2. Recommendations

    • Standardized impact‑assessment frameworks across ministries to ensure consistent evaluation of AI deployments.
    • Cross‑border data‑sharing protocols that respect privacy but enable joint risk‑monitoring.
  3. Challenges Highlighted

    • Fragmentation of regulations among neighboring countries hampers seamless AI service delivery.
    • Need for capacity‑building in smaller public‑sector units to interpret technical risk reports.

3.3 Gobind Singh Deo (Malaysia) – “Middle‑Power” Diplomacy

  1. Strategic Position

    • Malaysia, as a global‑majority state, can act as a bridge between tech‑heavy (U.S., China) and development‑focused nations.
  2. Policy Instruments

    • Promotion of regional AI standards through ASEAN, aligning with OECD principles but tailored to local contexts.
    • Incentivizing public‑private partnerships for AI research that embeds safety criteria from inception.
  3. Concrete Action Items

    • Draft a “Regional AI Safety Charter” to be ratified by ASEAN members within the next 12 months.
    • Establish a joint funding pool for AI safety research led by universities in the region.

3.4 Jaan Tallinn (Future of Life Institute) – Investor Perspective

  1. Risk‑Adjusted Investment

    • Venture capitalists increasingly demand safety‑by‑design clauses in AI startup term sheets.
    • The Future of Life Institute runs a “Safety Grant” program that funds projects demonstrating concrete risk‑mitigation techniques.
  2. Calls to Action

    • Standardized “AI Safety Audits” for funded projects, akin to financial due‑diligence.
    • Global “Responsible AI Investor Registry” to promote transparency about funding sources and safety commitments.
  3. Open Debate

    • Tension between rapid market scaling (desired by investors) and slow, rigorous safety testing (required by regulators).

3.5 Sangbu Kim (World Bank) – Development‑Focused AI Governance

  1. AI for Public‑Sector Value

    • World Bank’s AI for Development (AI4D) program pilots AI tools in health, agriculture, and education across low‑ and middle‑income countries.
  2. Key Insight

    • Trust and legitimacy are only achieved when AI solutions are co‑designed with local stakeholders and are transparent about data provenance.
  3. Policy Recommendations

    • Creation of an “AI Governance Toolkit” for ministries, containing checklists, governance templates, and model contracts.
    • Strengthening multilateral financing mechanisms to support AI safety capacity‑building in resource‑constrained governments.

4. Synthesis & Moderator’s Closing

  • Common Themes Across Panelists

    1. Inclusion & Multi‑Stakeholder Governance – all speakers stressed that diverse participation (government, industry, academia, civil society) is non‑negotiable for credible AI safety regimes.
    2. Need for Coordinated Infrastructure – the idea of an International Incident‑Response Center resonated, though feasibility remains contested.
    3. Bridge Between Technical and Policy Worlds – calls for standardized assessment tools, sandbox environments, and audit mechanisms that translate technical risk metrics into policy decisions.
  • Areas of Disagreement / Open Questions

    • Jurisdiction vs. Centralization: Cormann’s IIRC proposal versus concerns about sovereignty raised implicitly by the other panelists.
    • Speed of Innovation vs. Safety Controls: Tallinn’s investor perspective highlighted a tension the panel did not fully resolve.
  • Actionable Next Steps (as framed by the moderator)

    1. Draft a joint “Middle‑Power AI Safety Roadmap” within the next quarter, leveraging ASEAN, OECD, and World Bank networks.
    2. Pilot a regional AI safety sandbox (Singapore‑Malaysia‑India) to test cross‑border data‑sharing and rapid‑response protocols.
    3. Convene a technical‑policy working group to design a globally recognized AI Safety Audit template.
  • Announcements

    • AI Safety Connect’s second annual conference will be held at UNESCO Headquarters, Paris, starting Tuesday (registration still open).
    • The International AI Safety Report and the Singapore Consensus on Global AI Safety Research Project are slated for public release later this month.

Key Takeaways

  • Inclusive, evidence‑based consensus‑building is the cornerstone of durable AI safety governance (OECD).
  • Middle‑power states (e.g., Malaysia, Singapore, ASEAN) can catalyze global standards and act as bridges between tech‑centric and development‑centric perspectives.
  • An International Incident‑Response Center is widely viewed as a critical yet politically sensitive piece of infrastructure for rapid coordination on AI mishaps.
  • Regulatory sandboxes and standardized impact‑assessment frameworks are practical tools for governments to experiment safely while maintaining oversight.
  • Investors are beginning to embed safety clauses in funding contracts, but a universal AI Safety Audit standard is needed to align market incentives with public‑interest risk management.
  • Development agencies (World Bank) emphasize co‑design, transparency, and capacity‑building to ensure AI deployments deliver public‑sector value without eroding trust.
  • Immediate actions include a joint Middle‑Power AI Safety Roadmap, a regional sandbox pilot, and a working group on AI safety audits—steps that could bring policy from rhetoric to measurable impact within months.
