Building Sovereign Deep Tech for a Resilient Future: Solutions from Finland and India

Abstract

The panel examined the emerging wave of AI‑driven incidents—ranging from cyber‑attacks on critical infrastructure to deep‑fake‑enabled disinformation—and debated how cross‑border governance, standardized reporting, and resilient technical infrastructures can prevent such events from undermining societal security. Drawing on experiences from the OECD, NIST, Japan’s AI Safety Institute, and Brazil’s national AI strategy, the participants highlighted gaps in detection, accountability, and preparedness, and proposed concrete steps toward a shared, lightweight incident‑reporting regime.

Detailed Summary

1. Moderator's Framing (Kyle Machado)

  • Moderator (Kyle Machado) opened the session, introduced the panel, and thanked the audience. A brief “photo‑shoot” moment preceded the formal start.

  • He cited a recent Future Society Athens Roundtable on “serious AI incident prevention and preparedness,” summarising three recurring themes:

    1. Inadequate detection – current monitoring relies on media scraping; systematic, jurisdiction‑spanning detection is lacking.
    2. Accountability gaps – AI’s global supply chain makes liability attribution difficult; responsibility ultimately lies with the humans who design, deploy, and maintain systems.
    3. Preparedness shortfalls – response capacity, coordination mechanisms, and post‑incident learning are under‑developed.
  • He posed four framing questions for the panel (cross‑border organization, technical‑policy components of incident infrastructure, institutional roles, and linking incidents to “red‑line” risk thresholds).

2. Panelist Contributions

2.1. Marko Grabelnik (OECD AI Incident Working Group)

  • Gap 1 – Sensing: Current incident feeds are limited to mainstream news; social‑media streams are largely ignored because of platform restrictions.
  • Gap 2 – Feedback Loop: Policymakers receive raw incident lists but lack actionable recommendations. The OECD is prototyping a “proactive‑policy‑recommendation” layer that maps incident patterns to mitigation ideas.
  • Incident Taxonomy: OECD classifies incidents into ~14 clusters (e.g., election‑related spikes, autonomous‑vehicle accidents, deep‑fake proliferation). Trends:
    • Election‑related incidents spike around voting periods.
    • Autonomous‑car accidents peaked early (single high‑profile case) then plateaued as the public grew accustomed.
    • Deep‑fakes have surged dramatically in the last 6 months, driven by low‑cost tools (e.g., “Nano Banana”).
  • Catastrophic Scenario: The taxonomy reserves a slot for “catastrophic AI incidents” (still hypothetical). He warned that hype (“AI doomers”) can pressure policymakers to over‑react.
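The trend analysis Grabelnik described can be sketched as a simple count of dated incident records per taxonomy cluster; the records and cluster labels below are illustrative stand-ins, not OECD data:

```python
from collections import Counter
from datetime import date

# Hypothetical incident records: (detection date, taxonomy cluster).
# Cluster names are illustrative, not the OECD's official labels.
incidents = [
    (date(2024, 10, 20), "election-manipulation"),
    (date(2024, 11, 2),  "election-manipulation"),
    (date(2024, 11, 4),  "deepfake"),
    (date(2025, 3, 15),  "deepfake"),
    (date(2025, 4, 1),   "deepfake"),
    (date(2025, 4, 9),   "autonomous-vehicle"),
]

def cluster_counts(records, since):
    """Count incidents per taxonomy cluster detected on or after `since`."""
    return Counter(cluster for day, cluster in records if day >= since)

# Restricting the window to recent months surfaces the deep-fake surge
# described above, while election-related reports cluster around the vote.
recent = cluster_counts(incidents, since=date(2025, 1, 1))
```

Even this toy aggregation shows why a shared cluster vocabulary matters: without it, counts from different feeds cannot be merged into one trend line.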

2.2. Elham Tabassi (Former Chief AI Advisor, US NIST)

  • Temporal vs. Jurisdictional Mismatch: Most risk‑management frameworks focus on pre‑deployment assessments, assuming risks can be predicted and contained within a single jurisdiction. Real‑world AI incidents are post‑deployment, often emergent and cross‑border.
  • Limitations of Current Testing: Red‑team exercises, benchmark‑driven threat modeling, and static testing cannot reliably predict behavior when AI systems interact with messy, real‑world environments.
  • Blind Spots:
    • Lack of systematic monitoring; reliance on external researchers or media reports.
    • No shared taxonomy or uniform reporting thresholds across countries.
  • Recommendation: Blend assurance‑centric governance with continuous post‑deployment monitoring and consider post‑decommissioning risks (e.g., lingering models after a company shuts down).
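Tabassi's call for continuous post-deployment monitoring can be sketched as a sliding-window check over observed incident signals; the window size, threshold, and signal stream below are illustrative assumptions, not a prescribed method:

```python
from collections import deque

class PostDeploymentMonitor:
    """Minimal sketch: escalate when the recent incident rate crosses a threshold."""

    def __init__(self, window: int = 10, threshold: float = 0.3):
        self.window = deque(maxlen=window)  # most recent observations only
        self.threshold = threshold

    def observe(self, is_incident: bool) -> bool:
        """Record one observation; return True if escalation is warranted."""
        self.window.append(is_incident)
        rate = sum(self.window) / len(self.window)
        # Only escalate once the window is full, to avoid noisy early alerts.
        return len(self.window) == self.window.maxlen and rate >= self.threshold

monitor = PostDeploymentMonitor(window=5, threshold=0.4)
signals = [False, False, True, False, True, True]
alerts = [monitor.observe(s) for s in signals]  # escalates on the last two
```

The point of the sketch is the lifecycle shift: checks like this run for as long as the system is live, and the decommissioning question above asks who keeps running them after the operator is gone.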

2.3. Akiko Murakami (Japan AI Safety Institute)

  • Institutional Mandate Variation: AI safety institutes differ globally—some are fully independent, others sit within ministries. Japan’s institute acts as a national information hub and is expanding to direct model examination capabilities.
  • Living Framework: Because AI technology evolves rapidly, Japan publishes its incident‑response playbook as a living document, updated continuously rather than issued as static guidelines.
  • Two‑Tier Working‑Group Structure:
    • Vertical groups focus on sector‑specific issues (healthcare, robotics, finance, etc.).
    • Horizontal groups address cross‑cutting technical safeguards (data quality, model inspection).
  • Collaboration Model: The Institute partners with domestic private firms and aims to scale up to international cooperation, emphasizing technical safeguards over ethical/human‑rights debates.

2.4. Hugo Valadares (Brazilian Ministry of Science & Technology)

  • Brazilian AI Plan (2024): Outlines five axes comprising 54 actions, covering human resources, data sovereignty, super‑computing, and cyber‑security (the newly added fourth axis).
  • Current Threat Landscape: Recent large‑scale cyber‑attacks on Brazil’s infrastructure prompted a R$100 million investment in research on child online safety and AI‑generated disinformation.
  • International Cooperation Emphasis: Brazil stresses that bureaucracy lags behind technology speed; thus, rapid MOUs and bilateral projects (e.g., sensor networks with Spain) are vital.
  • Key Gap Identified: The political‑administrative lag that hinders timely response to fast‑moving AI threats.

2.5. Caio S. (Brazil, additional speaker)

  • Monitoring System Requirements (responding to a question from Elham):
    1. Decision‑relevant outputs – incident reports must translate into actionable facts for policymakers.
    2. Lightweight adoption – reporting should not be overly burdensome; it must be simple enough for wide uptake.
    3. Standardized, flexible taxonomy – a core set of definitions (e.g., OECD taxonomy) that can be adapted to local legal regimes.
  • Incentive Structures: Highlighted the need for positive incentives for private actors to disclose incidents (in contrast with cybersecurity, where a reported vulnerability is typically followed by a patch).
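The three requirements above can be sketched as a minimal incident-record schema: decision-relevant fields, a small core vocabulary, and a local-extension slot. Field names, cluster terms, and the example values are assumptions for illustration, not an agreed standard:

```python
import json
from dataclasses import dataclass, asdict, field

# Illustrative core vocabulary; a real schema would follow an agreed
# taxonomy such as the OECD's.
CORE_CLUSTERS = {"deepfake", "cyber-attack", "autonomous-vehicle", "other"}

@dataclass
class IncidentReport:
    incident_id: str
    cluster: str                 # term from the shared core taxonomy
    jurisdiction: str            # country code of first detection
    severity: int                # 1 (minor) .. 5 (catastrophic)
    summary: str                 # decision-relevant, plain-language summary
    local_tags: list = field(default_factory=list)  # jurisdiction-specific extensions

    def validate(self) -> None:
        if self.cluster not in CORE_CLUSTERS:
            raise ValueError(f"unknown cluster: {self.cluster}")
        if not 1 <= self.severity <= 5:
            raise ValueError("severity must be between 1 and 5")

    def to_json(self) -> str:
        self.validate()
        return json.dumps(asdict(self))

report = IncidentReport(
    incident_id="BR-2025-0042",
    cluster="deepfake",
    jurisdiction="BR",
    severity=3,
    summary="AI-generated audio impersonation used in a payment-fraud wave.",
    local_tags=["financial-fraud"],
)
payload = report.to_json()  # lightweight enough for wide uptake
```

The `local_tags` field is the "flexible" half of the requirement: the core fields stay comparable across jurisdictions while local legal regimes attach their own labels.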

2.6. Ugo (surname not captured; possibly from Italy)

  • Cross‑Jurisdictional Information Sharing:
    • Causality: Current incident reports lack causal analysis, hampering policy feedback.
    • Complexity of Modern Agents: Future AI agents will be far more opaque than today’s LLMs, making root‑cause discovery harder.
    • Real‑time Mitigation: Emphasizes that reaction time must be near‑instant; delayed mitigation may be ineffective.
    • Business‑Secret Tension: Companies are reluctant to share detailed technical internals; policy must balance confidentiality with transparency.

2.7. Additional Moderation & Audience Interaction

  • Moderator’s Follow‑Up Questions: Probed the panel on practical steps for standardized reporting, thresholds for international escalation, and information that should be shared among jurisdictions.
  • Audience Q&A: Several audience members sought clarification on Brazil’s planned AI Transparency Institute, the role of PIX (Brazilian payment system) in AI‑enabled attacks, and how middle powers can coordinate incident response. The panel reiterated the need for regional MOU networks and shared technical standards.

3. Emerging Themes & Recommendations

  • Detection / Sensing: Media scraping is insufficient; systematic, possibly mandatory, reporting pipelines are needed (OECD, NIST).
  • Accountability: Current liability regimes are fragmented; responsibility ultimately lies with human operators.
  • Preparedness & Response: Build lightweight, standardized reporting; establish feedback loops that output policy‑ready recommendations.
  • Taxonomy & Definitions: Adopt the OECD’s 14‑cluster taxonomy as a common baseline, with extensions for emerging agentic AI.
  • Incentives: Create regulatory or market incentives (e.g., certifications, liability shields) to encourage openness.
  • Cross‑Border Governance: Treat national AI safety institutes as nodal points; promote vertical‑horizontal working groups.
  • Post‑Deployment Monitoring: Continue risk assessment after AI systems go live, including de‑commissioning traceability.
  • Data Sharing & Confidentiality: Develop tiered information‑sharing protocols (public summaries vs. confidential technical details).
  • Political & Bureaucratic Lag: Accelerate policy cycles; adopt outcome‑based rather than prescriptive rules.
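The tiered information-sharing idea in the table above can be sketched as a split of one full incident record into a shareable view and a restricted annex; the field names and the confidential-field set are illustrative assumptions:

```python
# Fields assumed confidential for illustration (business secrets, exploit data).
CONFIDENTIAL_FIELDS = {"model_internals", "exploit_details", "affected_vendor"}

def tiered_views(record: dict) -> tuple[dict, dict]:
    """Split a record into a public summary tier and a restricted technical tier."""
    public = {k: v for k, v in record.items() if k not in CONFIDENTIAL_FIELDS}
    restricted = {k: v for k, v in record.items() if k in CONFIDENTIAL_FIELDS}
    return public, restricted

record = {
    "incident_id": "JP-2025-0007",
    "cluster": "cyber-attack",
    "summary": "Prompt-injection campaign against a customer-service agent.",
    "exploit_details": "payload strings and system-prompt excerpts",
    "affected_vendor": "ACME Corp",
}
public_view, restricted_view = tiered_views(record)
# public_view is safe to publish; restricted_view is shared only under agreement.
```

A single authoritative record with derived views keeps the two tiers consistent, rather than maintaining a public report and a confidential report that can drift apart.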

4. Open Questions & Points of Debate

  • How to define the causal chain of an AI incident in a way that is both technically robust and legally admissible?
  • What thresholds (e.g., severity, scale, cross‑border impact) should trigger mandatory international notification?
  • How to reconcile business‑secret protection with the need for detailed technical data in incident analysis?
  • What governance model best balances national sovereignty with the global nature of AI supply chains?

Key Takeaways

  • AI incidents are rising in frequency, scale, and diversity (cyber‑attacks, deep‑fakes, autonomous‑vehicle mishaps).
  • Current detection relies on media scraping; systematic, possibly mandatory, cross‑border reporting mechanisms are urgently needed.
  • Liability frameworks lag behind the global, multi‑jurisdictional nature of AI development and deployment.
  • Post‑deployment monitoring—including continuous risk assessment and de‑commissioning traceability—is essential; pre‑deployment checks alone are insufficient.
  • A lightweight, standardized taxonomy (e.g., OECD’s 14‑cluster model) can serve as a common language for incident reporting across jurisdictions.
  • Feedback loops that turn incident data into actionable policy recommendations must be built into any monitoring infrastructure.
  • Incentive structures (e.g., liability shields, certifications) are critical to persuade private actors to disclose incidents voluntarily.
  • National AI safety institutes (Japan, Brazil, etc.) can act as hubs for sector‑specific and cross‑cutting working groups, fostering both domestic and international collaboration.
  • Bureaucratic inertia remains a major obstacle; rapid, outcome‑based regulatory approaches are recommended to keep pace with AI evolution.
  • Cross‑border cooperation—through MOUs, regional AI safety networks, and shared technical standards—is the only realistic path to a resilient AI ecosystem.

Prepared by the AI Conference Summarization Team.