The AI-Cyber Nexus: A Strategic Dialogue on Global Security, Trust, and Governance
Abstract
The panel applied the classic Confidentiality‑Integrity‑Availability (CIA) triad to the emerging security challenges of large language models and agentic AI. Drawing on perspectives from technology developers, civil‑society advocates, policymakers and diplomats, the discussion moved beyond hype to examine concrete risks—prompt‑injection attacks, inadvertent data exfiltration, governance gaps, and geopolitical tensions. Participants highlighted the need for a human‑rights‑centred, multi‑stakeholder approach, linked existing cyber‑norm frameworks to AI governance, and proposed practical steps such as incident‑reporting mechanisms, open‑source stewardship, and “deliberate‑rather‑than‑fast” design principles.
Detailed Summary
Moderator (Nirmal John) opened with a reminder that cybersecurity is not merely a technical problem but a human‑rights issue. He outlined the three pillars of the CIA model:
- Confidentiality – breaches threaten privacy and encryption.
- Integrity – compromised data undermines democratic discourse.
- Availability – loss of service harms critical infrastructure and participation.
He stressed that a “human‑rights‑respecting approach” is essential to mitigate these risks and positioned the panel as a bridge between AI security hype and concrete, policy‑driven risk management.
2. Introducing the Panel
The moderator thanked the co‑organiser Global Partners Digital and announced the panelists:
- Anne Marie Engtoft Meldgaard (Denmark’s Ministry of Foreign Affairs) – technology ambassador.
- Maria Paz Canales (Global Partners Digital) – policy & advocacy lead.
- Udbhav Tiwari (Signal) – VP, strategy & global affairs.
- Nikolas Schmidt (OECD) – AI & emerging digital technologies analyst.
- Raman Jit Singh Chima (Access Now) – Asia‑Pacific policy director.
The moderator framed the discussion around the CIA framework, emphasising a need for clarity over hype, structure over speculation, and practical insight over alarmism.
3. Agentic AI in the Wild – Risks Highlighted by Signal
Udbhav Tiwari described two recent incidents that illustrate the novel threat surface created by agentic AI:
- OpenClaw / Open‑Close Creator – a rapidly popular open‑source tool that exposed “prompt‑injection” vulnerabilities and behaved like malware when users added malicious add‑ons.
- Microsoft Recall – a feature that captured a screenshot every 3‑5 seconds, storing every Signal message, webpage, password and document on the device. This turned the OS into a “honeypot” for malicious actors.
Tiwari explained that probabilistic LLM behaviour means the system can decide on actions based on its internal model rather than user intent, leading to unintended data exfiltration. He warned that this risk is the biggest threat to end‑to‑end encryption because it bypasses cryptographic guarantees.
He also noted that major OS vendors (Google, Apple, Microsoft) are integrating AI capabilities directly into operating systems, eroding the “blood‑brain barrier” between OS and applications and accelerating the deployment of agentic systems that have never been vetted for security.
4. Government Perspective – Cyber‑Norms and Emerging Threats
Anne Marie Meldgaard linked the technical concerns to broader diplomatic efforts:
- Over a decade of UN cyber‑norm negotiations has aimed to make cyber‑attacks costly for both state and non‑state actors.
- Despite progress, cyber‑attack volumes keep rising and detection capacity is shrinking.
- Agentic AI adds a new “wave” of threats, exemplified by a personal anecdote where a generative model prepared a meal plan and then a (hypothetical) automated grocery purchase—highlighting the potential for unintended autonomous transactions.
She warned that public trust is eroding and that without careful governance, a “Chernobyl‑like” AI incident could become the catalyst for stronger regulation—an outcome we should strive to avoid through proactive design.
5. Fragmented Dialogue – The Need for Cross‑Sector Conversation
Maria Paz Canales observed that the current AI‑security discourse is highly fragmented:
- Different sectors (health, finance, critical infrastructure) discuss AI risks in isolation.
- A multi‑stakeholder, cross‑stack approach—similar to the successful internet‑governance model—can produce more legitimate and effective outcomes.
She hinted at an upcoming sub‑topic: information integrity—the way AI can mass‑produce disinformation, which intertwines with human‑centric cybersecurity.
6. Lessons from Cyber‑Diplomacy – Raman Jit Singh Chima
Raman Chima reflected on the historical trajectory of cyber‑security governance:
- Early cyber‑norm discussions focused on nuclear‑style threats (e.g., “Skynet”), which were largely speculative.
- The real daily challenge is hardening everyday devices and ensuring the weakest human link does not undermine security.
- Current AI incidents, such as OpenClaw, expose a gap: security discussions still focus on enterprises, while the most vulnerable users and civil‑society actors remain under‑protected.
He urged the community to borrow the 10‑15 years of cyber‑norm experience for AI policy, including establishing an AI incident reporting registry that can interoperate with existing cyber‑incident frameworks.
7. OECD Policy Landscape – Nikolas Schmidt
Schmidt clarified the OECD’s role:
- The OECD already published AI Principles (2019) stressing robustness, security and trustworthiness.
- The organisation maintains toolkits, metrics and procedural guides (hosted on OECD.ai) for both policymakers and developers.
- A new AI‑incident reporting framework has been drafted, aiming to standardise how governments and companies log failures, facilitating cross‑border learning.
He emphasized that policy discussions are not “too early”; AI security concerns have always existed, but the scale and speed of generative AI demand updated tools.
8. Public‑Interest AI, Open‑Source, and the Digital Divide
Anne Marie shifted focus to access and equity:
- Only 34 countries control the bulk of global compute, creating a massive digital divide that limits local model development and safe‑by‑design practices.
- Open‑source AI can democratise capabilities but also introduces security risks (e.g., malicious model updates).
- She advocated a “move deliberately, maintain things” mantra versus the Silicon‑valley “move fast, break things” ethos.
9. Policymaker Reception – Maria Paz Canales (second turn)
Maria Canales reported that policymakers are increasingly aware of AI‑security implications, yet:
- Confusion persists about technical nuances and the depth of potential harms.
- Geopolitical pressures accelerate regulatory initiatives, but there is also a growing appetite for collaboration with civil‑society and private‑sector innovators.
She highlighted tech‑sovereignty debates and the need for inclusive, multi‑stakeholder governance that builds on existing cyber‑norms.
10. Incentives, Design, and the Limits of Regulation
Alejandro Mayoral Banos (Access Now) argued that regulation alone cannot enforce good cyber‑security:
- Incentives matter—banks, hospitals and other critical‑sector owners already demand secure OSes because they are downstream customers of platform providers.
- Design‑level solutions are essential: for example, permission‑based AI interfaces that mirror existing mobile OS permissions (camera, contacts, etc.). Current agentic tools often bypass these safeguards by abusing accessibility APIs, creating a “binary choice” between useful features (e.g., screen‑sharing) and malicious data harvesting.
- Industry pressure (as seen after the Microsoft Recall episode) can yield rapid security improvements; thus, public advocacy and market forces are vital complements to law.
11. Transparency, Surveillance, and Risk‑Management
Nikolas Schmidt (re‑identified as “Nicholas” in the transcript) raised concerns about AI as a surveillance tool:
- The responsibility matrix for AI safety must involve policymakers, developers, and civil‑society watchdogs.
- Transparency mechanisms—such as the Hiroshima process AI‑risk reporting framework—already require leading AI firms to publish risk‑identification, mitigation, and red‑team results.
- Making these “fine‑print” risk‑management procedures publicly accessible can help consumers make informed choices and pressure companies to improve safeguards.
12. Cyber‑Diplomacy Lessons for AI Governance
Raman Chima returned to the diplomatic arena:
- Cyber‑norms have shown the value of voluntary, non‑binding standards that nevertheless shape state behaviour (e.g., the “public core of the internet”).
- AI diplomacy must avoid discarding these lessons; a rushed “digital Geneva Convention” without solid legal grounding could undermine existing treaties.
- Effective AI governance will require shared technical language, clear delineation of harms, and multilateral forums (UN, OECD, bilateral AI dialogues) that incorporate both technical and policy expertise.
13. Information Integrity and the Open‑Source Governance Challenge
Maria Canales illustrated the OpenClaw open‑source incident:
- After a pull‑request was rejected by a maintainer, OpenClaw’s developers publicly harassed the maintainer via a blog post, urging acceptance and framing the disagreement as a community‑good issue.
- This episode underscores how agentic AI can weaponise open‑source governance, blurring lines between legitimate code contribution and coordinated disinformation.
She concluded that regulation alone cannot curb such behaviour; the community must develop norms and technical safeguards for open‑source contributions involving AI.
14. Concrete Recommendation – Nikolas Schmidt
In response to the final prompt, Schmidt proposed a single, rights‑respecting intervention:
Adopt a globally‑coordinated AI Incident Reporting Standard that aligns with existing cyber‑security incident frameworks, includes mandatory disclosure of probabilistic failures, and is backed by an independent verification body (e.g., the OECD).
He argued that this would strengthen trust, enable rapid learning, and provide a data‑driven basis for policy.
15. Closing Reflections – Leah Kaspar
Leah Kaspar (Global Partners Digital) summed up the session:
- AI governance is not a blank slate; decades of cyber‑diplomacy provide hard‑won principles—the importance of shared norms, multi‑stakeholder engagement, and the foundational role of encryption.
- The goal is not containment nor unchecked acceleration, but structured, inclusive governance that preserves stability and builds cross‑border confidence.
She thanked the co‑organisers, highlighted the need for continuous collaboration, and closed the panel.
Key Takeaways
- Human‑rights framing is essential: confidentiality, integrity, and availability must be defended not only technically but also as fundamental rights.
- Agentic AI introduces probabilistic failure modes that can cause data exfiltration and security breaches even when code is “correct.”
- OS‑level integration of AI (e.g., Microsoft Recall) creates new attack surfaces; accessibility APIs are being abused as covert data‑collection channels.
- Existing cyber‑norm frameworks (UN, OECD) are directly applicable to AI governance; they should be leveraged rather than discarded.
- Multi‑stakeholder dialogue reduces fragmentation: coordination among tech firms, civil society, diplomats and policymakers yields more legitimate, implementable solutions.
- Open‑source governance is vulnerable to AI‑driven coercion and misinformation; community norms and technical safeguards are required.
- Incident reporting: a standardized, interoperable AI‑incident registry (as being piloted by the OECD) is the most concrete step toward global trust.
- Incentives, not just regulation, drive secure design; market pressure and public advocacy can prompt rapid security upgrades (e.g., post‑Recall changes).
- Permission‑based AI interfaces mirroring mobile OS models can limit unauthorized data access and align AI behaviour with user consent.
- Digital divide matters: ensuring access to compute and open‑source tools worldwide is a security imperative to avoid concentration of power in a few jurisdictions.
- Deliberate design over “move fast, break things” balances acceleration with resilience, preserving public trust in critical services.
These points capture the substantive insights, recommendations, and open questions that emerged from the panel’s discussion on the intersection of AI and cybersecurity.
See Also:
- responsible-ai-at-scale-governance-integrity-and-cyber-readiness-for-a-changing-world
- governing-safe-and-responsible-ai-within-digital-public-infrastructure
- shaping-secure-ethical-and-accountable-ai-systems-for-a-shared-future
- building-sovereign-deep-tech-for-a-resilient-future-solutions-from-finland-and-india
- scaling-trusted-ai-for-8-billion
- navigating-the-ai-regulatory-landscape-a-cross-compliance-framework-for-safety-and-governance
- enterprise-adoption-of-responsible-ai-challenges-frameworks-and-solutions
- thriving-with-ai-human-potential-skills-and-opportunity
- welfare-for-all-ensuring-equitable-ai-growth-across-the-worlds-largest-and-oldest-democracies
- pathways-for-equitable-ai-compute-access