Implementing AI Standards for Global Prosperity in an Era of Agentic AI

Detailed Summary

1. Opening & Welcome

  • Ashley Casovan thanked the event organisers for handling logistics and introduced the panel.
  • She outlined the purpose: to focus the broad AI‑governance conversation through the lens of agentic AI and to explore what standard‑setting means for this emerging class of systems.

2. Speaker Introductions

  • Rachel Adams – Leads the Global Center on AI Governance (a South African think tank) and the Leverhulme Centre at Cambridge. Works on the Global Index on Responsible AI and authored The New Empire of AI. Emphasises the need to address both the promise and the risks of AI, especially for global inequality.
  • John Dickerson – CEO of Mozilla AI, a nonprofit focused on an open internet. Warns that a closed‑source, agentic‑AI‑driven browser could concentrate power over information access, commerce, and everyday online actions. Highlights Mozilla's community‑driven push for decentralised, privacy‑preserving AI.
  • Emmanuel Kahembwe – CEO of VDE (UK), a 130‑year‑old non‑profit rooted in electrotechnical standards. Leads work in the European AI standards committee (JTC 21) and works with the OECD on AI governance. Describes VDE's "trustworthiness" standard (graded A–G), which aligns technical specifications with the EU AI Act.

3. Why Standards Matter for Agentic AI

3.1 The Need for a Nuanced Lens

  • Ashley framed the conversation: after five or six years of AI‑governance talk, the community still lacks a nuanced approach to standards.
  • She asked what kinds of standards are needed: technical specifications, policy‑based compliance tools, or "people standards" (skills, training, workforce competencies).

3.2 Rachel’s View – Indices & Benchmarks

  • Existing governance tools (legislation, legacy standards) are inadequate for generative and agentic AI because these systems continue to evolve after deployment.
  • The Global Index on Responsible AI acts as a benchmarking tool for governments, enabling peer‑comparison rather than mere political theatre.
  • The first edition (2024) deliberately omitted generative‑AI indicators, recognizing that best practices were still emerging.
  • The upcoming 2026 edition introduces a new framework with audit‑type mechanisms, though still does not include explicit agentic‑AI metrics.

3.3 Emmanuel’s Perspective – Technical Standardisation in Europe

  • Standards vs. Legislation: Standards translate high‑level legal mandates (e.g., EU AI Act’s requirements for fairness, transparency, accountability) into operational definitions.
  • The JTC 21 process brings together a cross‑sector coalition (law, medicine, nuclear, defense) to achieve consensus on terminology and metrics.
  • Europe’s novel approach: standards are developed in parallel with the AI Act, with the European Commission providing real‑time feedback.
  • Two parallel families of standards:
    1. Compliance‑oriented (help organisations meet legal obligations).
    2. Best‑practice/competitive (help firms differentiate; VDE’s graded “trustworthiness” standard is an example).
  • Gradient grading (A‑G) allows markets to signal varying levels of compliance (e.g., stronger privacy focus in Germany).
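A graded label of this kind can be illustrated with a minimal sketch. The dimension names, the thresholds, and the weakest‑link aggregation below are invented for illustration only; they are not VDE's actual scheme.

```python
# Hypothetical sketch of a graded trustworthiness label (A = best, G = worst).
# Dimensions, thresholds, and aggregation are invented assumptions, not VDE's
# actual specification.

GRADES = "ABCDEFG"

def trust_grade(scores: dict[str, float]) -> str:
    """Map per-dimension scores in [0, 1] to a single letter grade.

    Conservative aggregation: the overall grade is driven by the weakest
    dimension, so one poor score cannot be averaged away.
    """
    weakest = min(scores.values())
    # Split [0, 1] into 7 equal bands: the top band maps to A, the bottom to G.
    band = min(int((1.0 - weakest) * len(GRADES)), len(GRADES) - 1)
    return GRADES[band]

system = {"transparency": 0.92, "robustness": 0.88, "privacy": 0.95}
print(trust_grade(system))  # weakest dimension 0.88 falls in the top band: "A"
```

The weakest‑link rule is one plausible design choice for the kind of market signalling described above (e.g., a jurisdiction that prizes privacy can insist on a high grade, which no system can reach with a poor privacy score).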

3.4 John’s Open‑Source Angle

  • Open‑source community (e.g., Mozilla, Linux Foundation’s Agentic AI working group) is building bottom‑up standards for identity, commerce, and robustness.
  • Lack of trust between large tech firms and regulators drives the need for transparent, community‑driven standards.
  • Highlights emerging agentic‑payments standards (e.g., Google's AP2), which are grassroots efforts not yet backed by formal regulatory bodies.

3.5 Global South Challenges – Rachel’s Insight

  • Traditional Western governance models assume well‑funded regulators; many Global‑South institutions are over‑stretched.
  • Calls for modular, sandbox‑style standards that can be adapted locally.
  • Notes that evaluation datasets used in the North often lack cultural relevance; proposes a regional evaluation network for assurance and data‑set sharing.
  • Stresses the importance of meaningful inclusion: Global‑South actors should be co‑authors of standards, not just “seats at the table.”

3.6 Risks & Open Questions – John & Emmanuel

  • Agentic Payments – Rapid adoption of autonomous purchasing and negotiation demands immediate standards to avoid market fragmentation and protect users.
  • Safety & Robustness – Agentic systems introduce a "lethal trifecta": exposure to untrusted input, access to private data, and the ability to egress that data. No current standards address this risk chain.
  • Risk Scoring for Agentic Actions – John proposes a risk‑score framework that evaluates sequences of agent actions (e.g., by email‑reading agents) to guide regulation and consumer awareness.
  • Alignment Across Jurisdictions – Emmanuel points out that existing physical‑world standards differ from digital‑world expectations; alignment with NIST (US) and ISO is still nascent.
  • Standard Development Process – Whether standards should be top‑down (legislation‑driven) or bottom‑up (open‑source, community‑driven) remains contested.
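As a thought experiment, the risk‑scoring idea might look something like the sketch below. The action categories, weights, and the trifecta rule are assumptions made for illustration; they do not represent any published framework.

```python
# Illustrative sketch of scoring a sequence of agent actions. Categories,
# weights, and the "lethal trifecta" rule are invented assumptions, not a
# proposed or published standard.

TRIFECTA = {"untrusted_input", "private_data_access", "data_egress"}
WEIGHTS = {"untrusted_input": 2, "private_data_access": 3, "data_egress": 4,
           "benign": 0}

def risk_score(actions: list[str]) -> tuple[int, bool]:
    """Return (additive risk score, trifecta flag) for an action sequence.

    The flag marks the dangerous combination of reading untrusted input,
    touching private data, and being able to send data out.
    """
    score = sum(WEIGHTS.get(a, 1) for a in actions)  # unknown actions cost 1
    trifecta = TRIFECTA.issubset(actions)
    return score, trifecta

# An email-reading agent: reads untrusted mail, looks up private contacts,
# then sends a reply (egress) - the full trifecta.
score, flagged = risk_score(["untrusted_input", "private_data_access",
                             "data_egress"])
print(score, flagged)  # 9 True
```

A regulator or marketplace could, in principle, use the additive score for proportionate oversight and treat the trifecta flag as a hard trigger for stricter review, though any real scheme would need far richer action models.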

4. Recommendations & Call‑to‑Action

  • Rachel – Use social media (e.g., TikTok) to spread awareness of AI standards and engage non‑technical audiences.
  • John – Bridge silos: professionals should converse with peers from different domains (software, policy, ethics) to translate across language gaps.
  • Emmanuel – Join national standards bodies and contribute to ISO/IEC committees so your perspective shapes international standards.
  • Ashley (moderator) – Emphasised the need for continuous dialogue and inclusive participation across regions and sectors.

5. Closing Remarks

  • Ashley thanked the panelists and organisers, reiterated the importance of collaborative standard‑setting for a safe, prosperous AI future, and closed the session.

Key Takeaways

  • Agentic AI introduces new risk vectors (untrusted inputs, private‑data access, data egress) that current standards do not yet cover.
  • Standards must be multi‑layered:
    1. Compliance‑oriented (e.g., EU AI Act translations).
    2. Best‑practice/competitive (e.g., VDE’s graded trustworthiness).
    3. Open‑source/community‑driven (e.g., Mozilla, Linux Foundation working groups).
  • Benchmarking tools like the Global Index on Responsible AI are valuable for peer‑comparison and policy‑driven accountability, especially when formal standards lag behind technology.
  • Global‑South participation is essential; standards should be modular, culturally aware, and co‑authored by representatives from under‑represented regions.
  • Agentic payments and autonomous commerce are emerging use‑cases that demand immediate standardisation to prevent market fragmentation and protect consumers.
  • Risk‑scoring frameworks for agentic action sequences could provide a practical way for regulators and industry to assess safety without stifling innovation.
  • Open‑source ecosystems (Mozilla, Linux Foundation) are key drivers of transparent, inclusive standards, counterbalancing potential regulatory capture by large incumbents.
  • Engagement pathways: individuals can influence standards by joining national bodies, contributing to international committees, and raising public awareness through outreach.
  • Collaboration across sectors and geographies—bridging siloed language gaps—is critical to building standards that are technically sound, socially responsible, and globally relevant.

Prepared by the AI Conference Summarisation Team.
