AI Diplomacy: Asia Leads – Governing Artificial Intelligence in a Fragmented World

Abstract

The panel explored how governments can cooperate when AI‑driven incidents erupt at machine speed, far faster than traditional diplomatic processes. After framing three illustrative “AI crisis” scenarios (financial cascade, deep‑fake election interference, and autonomous‑infrastructure failure), the discussion moved to the structural gaps in current international law, the limits of existing regulatory tools, and concrete proposals for an “AI crisis diplomacy” architecture anchored in trust, rapid verification, and coordinated technical response—particularly from an Asian leadership perspective.

Detailed Summary

1. Framing and Context

  • AI Safety Asia described its mission to bring together policy‑makers, diplomats, researchers and industry to address the governance and crisis implications of advanced AI, especially from Asian and Global‑South perspectives.
  • The session was part of a series on AI crisis diplomacy – the need for cross‑border coordination when AI incidents unfold in seconds with uncertain attribution.
  • A “Firestarter” paper (circulated beforehand) defined an AI crisis as a cross‑border event that outpaces any single authority’s ability to manage it, emphasizing the lack of existing verification channels.

2. Illustrative Scenarios (Moderator)

  • Financial cascade – an AI model misreads data and issues a false policy signal, triggering automated trading across jurisdictions. Core challenge: no rapid, trusted verification channel; risk of mutual blame.
  • Deep‑fake driven instability – a hyper‑real synthetic video appears before an election or peace negotiation and spreads across platforms in multiple jurisdictions. Core challenge: speed of verification lags behind political timelines; the damage is done before debunking.
  • Autonomous‑infrastructure incident – an AI‑controlled system in one country makes a decision that impacts another country’s critical infrastructure. Core challenge: legal and authority ambiguity over who is responsible and who can act when human decision‑makers are absent.

The moderator highlighted that each scenario shares uncertainty + speed as the decisive coordination obstacle.

3. Professor Stuart Russell – Why Agentic AI Ups the Governance Stakes

  1. Historical view – AI has existed for ~80 years; we are now in a boom driven by large‑language models (LLMs) and increasingly agentic systems (systems that perceive, decide and act).

  2. Myth‑busting

    • Myth 1: “Regulation can’t keep up.” Russell countered with the example of medicine regulation – standards endure regardless of technological change; risk tolerance, not technology, drives regulation.
    • Myth 2: “Liability is impossible for AI.” He noted that liability is a millennia‑old legal tool, yet tech firms routinely disclaim it (e.g., Microsoft’s EULA limiting compensation). Restoring robust liability regimes would provide a fast, technology‑agnostic lever.
  3. Limits of liability – It does not deter extinction‑level risks because if the risk materialises, the liable party may no longer exist to pay.

  4. Myth 3: “General‑purpose technologies cannot be regulated.” Russell disputed this by pointing to electricity, aviation, nuclear power – all heavily regulated despite being general‑purpose.

  5. Risk‑based regulation & “behavioral red‑lines”

    • Set a risk ceiling (e.g., acceptable probability of catastrophic failure).
    • Define red‑line behaviours (e.g., AI must not provide instructions for building biological weapons, must not impersonate humans).
    • Require demonstrable evidence that systems stay below the red‑line breach probability appropriate to the risk tier.
  6. Quantifying extinction risk – Russell suggested a benchmark of “≤ 1 in 100 million per year,” noting that CEOs of leading AI firms currently put the risk far higher (10–50 %).

  7. Regulatory call‑to‑action – Require companies now to prove safety at that risk level; otherwise, future compliance becomes impossible as systems become more capable.

(Russell’s remarks set the technical and normative foundation for the rest of the panel.)

4. Panelist Contributions

4.1 Wan Sie Lee – Singapore’s Pragmatic, Outcome‑Based Approach

  • Singapore regulates outcomes, not technology, applying sector‑specific rules (finance, health, HR, etc.).
  • Example: Deep‑fake law enacted for election periods – a narrow, surgical instrument that defines a deep‑fake, mandates platform takedown, and protects citizens from election‑related manipulation.
  • Emphasised the need for cross‑border coordination on high‑impact AI services (e.g., ChatGPT) and the role of diplomacy in building that coordination.

4.2 Dr. Yuko Harayama – Japan’s Trust‑Building and Technical Evaluation

  • Highlighted existing crisis‑coordination institutions (pandemic, cyber‑security) as a model for AI.
  • Stressed the importance of joint testing and evaluation to create mutual trust among governmental AI teams.
  • Proposed that regular, low‑stakes collaborations (information‑sharing, joint drills) lay the groundwork for rapid phone‑call‑level coordination when a crisis hits.

4.3 Prof. Alejandro Reyes – Institutional Gaps & the Need for New Practices

  • Noted that AI amplifies existing crises by compressing response time; humans are not equipped to decide quickly enough.
  • Argued for new frameworks that embed human‑to‑human, nation‑to‑nation, institution‑to‑institution channels, reinforced by mutual trust.
  • Emphasized that AI‑driven crises often lack the visible social consequences that usually drive diplomatic urgency, making pre‑built technical solutions even more critical.

4.4 Azizjon Azimi – Central Asian Perspective & Self‑Regulation Debate

  • Described Tajikistan’s role in drafting a UN General Assembly resolution (adopted 25 July 2023) that established a regional AI centre and recognized AI self‑regulation mechanisms.
  • Presented a libertarian critique of over‑regulation: AI is an instrument for human decision‑making; imposing stricter standards on machines than on their human users may disadvantage developing economies.
  • Shared a startup case study (Zippel) illustrating how commercial testing already acts as de‑facto regulation, while heavyweight EU AI Act compliance remains a barrier to market entry for Tajik firms.
  • Called for a risk‑based, outcome‑oriented framework, allowing sovereign innovation while still safeguarding against high‑impact failures.

4.5 Additional Remarks (Unidentified/Brief Interjections)

  • An unnamed senior panelist briefly echoed Lee’s point that regulation should be calibrated to risk, not applied uniformly.
  • Several participants highlighted the need for co‑responsibility models where governments, private sector, and users share accountability, moving beyond classic principal‑agent paradigms.

5. Cross‑Cutting Themes from the Discussion

  • Speed vs. diplomacy – AI incidents evolve in milliseconds; traditional diplomatic timelines are insufficient.
  • Liability & red‑lines – restoring robust liability and defining clear behavioral red‑lines are essential for rapid accountability.
  • Trust & verification – trusted communication channels (e.g., Singapore’s “short‑code” verification, Taiwan’s whitelist) reduce misinformation spread.
  • Existing crisis infrastructure – leverage and adapt pandemic, cyber‑security, and financial‑crisis mechanisms for AI.
  • Joint technical evaluation – regular cross‑border testing builds confidence and creates a common language for AI safety.
  • Risk‑based regulation – align regulatory burden with the probability and magnitude of harm; avoid one‑size‑fits‑all mandates.
  • Regional cooperation – proposals for a regional AI crisis liaison network (technical hotline) modelled on APCERT’s cyber‑incident framework.
  • Inclusivity of developing nations – ensure that regulatory regimes do not stifle AI innovation in the Global South; incorporate self‑regulation and capacity‑building.

6. Video Intervention – Dr. Audrey Tang (Taiwan)

  • Trust – create a whitelist for official communications (e.g., Taiwan’s “111” SMS shortcode), a low‑friction, verifiable channel that citizens can instantly trust.
  • Consensus – deploy AI‑driven large‑scale listening tools (e.g., Polis, Talk‑to‑the‑City) that surface bridging statements and provide auditable, traceable summaries of public opinion.
  • Safety – treat AI incidents as civil‑defence matters; establish a regional AI crisis liaison network (technical hotline) that builds on existing cybersecurity contact frameworks (APCERT).
  • Strategic outlook – position Asia as a supplier of safety infrastructure, not just a rule‑taker; leverage initiatives like India’s digital public infrastructure (AT & UPI) as models for cross‑border operational readiness.

Tang emphasized that institutionalized, pre‑negotiated technical channels are the only way to prevent improvisation when millisecond‑scale crises occur.

7. Closing Remarks (Moderator)

  • Acknowledged the breadth of perspectives and the need for concise, actionable next steps.
  • Invited participants to continue the dialogue through AI Safety Asia’s network and to contribute to the development of the regional AI crisis liaison.

Key Takeaways

  • AI crises unfold far faster than diplomatic processes; existing international law lacks the “real‑time verification” mechanisms needed to prevent escalation.
  • Liability and behavioral red‑lines are timeless regulatory levers; reinstating robust liability and defining clear prohibited AI behaviours can provide rapid, technology‑agnostic oversight.
  • Risk‑based, outcome‑oriented regulation (as practiced in Singapore and advocated by Tajikistan) balances safety with the innovation needs of developing economies.
  • Trust channels such as verifiable shortcodes or whitelist messaging are low‑cost tools that significantly reduce the impact of deep‑fake and impersonation attacks.
  • Joint technical evaluation and regular low‑stakes collaboration build the inter‑governmental trust required for rapid crisis response.
  • A regional AI crisis liaison network (technical hotline) modeled on cybersecurity incident‑response frameworks can institutionalize cross‑border coordination without demanding full political alignment.
  • Inclusive governance must give developing nations a voice and allow self‑regulatory mechanisms that complement, rather than replace, international standards.
  • AI‑driven crises amplify existing systemic risks (financial, political, infrastructural); addressing them requires both technical tools (large‑scale listening, verification) and institutional reforms (civil‑defence‑style response).
  • Asia is positioned to lead by combining its digital public‑infrastructure expertise (e.g., India’s UPI, Taiwan’s SMS whitelist) with collaborative regional bodies to create durable, cross‑border AI safety infrastructure.

Prepared from the verbatim transcript of the AI Diplomacy panel at the India AI Impact Summit (Delhi, 2024). All speaker attributions reflect the provided speaker list and contextual cues in the conversation.
