Agentic AI Roundtable

Abstract


3.2. Open Questions & Debates

  • Scope of liability: Who is ultimately accountable when an AI agent makes a harmful decision? (Consensus: human owners/companies).
  • Degree of autonomy vs. oversight: How much autonomy can an agent be granted before “human‑on‑the‑loop” oversight becomes insufficient? (Discussion circled around the FAA drone model.)
  • Standard‑setting speed: ISO standards take years to finalize; how can rapid innovation be accommodated? (Consensus: voluntary, consensus‑based standards such as those from NIST and CAISI serve as interim bridges.)

4. Policy Panel – “What Should Policymakers Focus On?”

Structure: Each policy panelist introduced themselves and highlighted one priority for governments.

| Panelist | Organization | Priority Recommendation |
|---|---|---|
| Jennifer Mulvaney | Adobe | Put humans before models – policy should evaluate human impact first. |
| Ellie Sakhaei | Google | Emphasise human‑in‑the‑loop governance across the autonomy continuum. |
| Carly Ramsey | Cloudflare | Promote open, interoperable standards and global harmonisation (NIST ↔ Singapore). |
| Sam Kaplan | Palo Alto Networks | Base policy on security‑first standards; understand the three‑dimensional risk surface of agents. |
| Danielle Gilliam‑Moore | Salesforce | Allow sector‑specific regulatory frameworks (e.g., finance, health) that leverage existing expertise. |
| Combis | ServiceNow | Deliver practical, operational guidance (playbooks, benchmarks) rather than abstract principles. |
| Kambiz | (unspecified) | Use multilateral forums (ITU, UN “AI for Good”) to ensure inclusive, global coordination. |

4.1. Suggested Multilateral Platforms

  • OECD AI Principles & Reporting Framework – widely referenced by US states, EU AI Act, and APAC ministries.
  • International Consortium of Safety Institutes – for tactical standards and taxonomy of agentic‑AI security.
  • Singapore International Cyber Week – annual venue for cross‑government dialogue on cyber‑policy and AI.
  • Technical Benchmarks for Multi‑Agent Systems – academic‑industry collaborations to validate emergent behaviors before deployment.

Recommendation: Policymakers should anchor their work in OECD‑based principles, complement them with sector‑specific safety‑institute consortia, and foster global standards harmonisation through events like Singapore Cyber Week.

5. Closing Remarks

  • The moderator thanked the panel, reiterated the business‑policy pairing of the roundtable, and directed attendees to an ITI white‑paper on agentic AI (available at itic.org).
  • A brief transition cue signaled the move from the business discussion to the policy segment (≈ 90 seconds).
  • The session ended with applause and gratitude to all participants, and a reminder of the AI Impact Summit’s partnership with the Government of India.

Detailed Summary

  • Introduced the Center for AI Standards and Innovation (CAISI), formerly the U.S. AI Safety Institute, re‑branded in June 2025 under Secretary of Commerce Howard Lutnick to focus on standards and innovation rather than safety alone.
  • Explained CASE’s dual positioning:
    • Front‑door for industry to interact with the U.S. government.
    • Embedded in NIST, which historically creates voluntary standards that drive economic growth.
  • Announced three key initiatives:
    1. AI‑agent standards initiative – a request for information (RFI) on AI‑agent security.
    2. Publication from NIST’s Information Technology Laboratory on AI identity & verification (open for public comment).
    3. Sector‑specific listening sessions (April) targeting health‑care, education, and finance to surface adoption barriers.

Announcement: RFI on AI‑agent security; NIST publication on AI identity & verification; upcoming sector‑specific listening sessions.

2. Business‑Use‑Case Showcase (Agentic AI in Industry)

2.1. Synopsys – “Agentic Engineers”

Speaker: Prith Banerjee

  • Synopsys provides electronic design automation (EDA) tools for chip design (used by Nvidia, AMD, Qualcomm, etc.).
  • Described the transition from “chip‑to‑system” to a “chips‑to‑systems” model after the $35 bn acquisition of Ansys.
  • Highlighted the pace of innovation (new car or chip every year instead of every 3‑5 years) and the explosive growth in transistor count (million → trillion) which makes human‑only design infeasible.
  • Introduced “Agentic Engineers” – autonomous AI agents that perform low‑level reasoning tasks (e.g., layout, verification) while human engineers remain in the loop for higher‑level decisions.
  • Emphasized that these agents operate in a “physical AI” context (physics‑based simulations) rather than solely in the large‑language‑model space.

Key Insight: Agentic AI augments, not replaces, human engineers, enabling design cycles that match the accelerated market tempo of modern automotive and semiconductor products.

2.2. Mastercard – Autonomous Payments & Security

Speaker: Caroline Louveaux

  • Mastercard has decades of AI‑driven fraud detection; the shift now is from recommendation to action (AI agents that act on transactions).
  • Use‑case: real‑time fraud detection and transaction blocking – decisions made in milliseconds at global scale.
  • Stressed four guardrails that must accompany agentic payments:
    1. Know Your Agent – verification & trust of the AI ‘actor’.
    2. Security‑by‑Design – tokenization & secure credential handling.
    3. Clear Consumer Intent – explicit authorization (illustrated by a “sushi‑order” mishap).
    4. Traceability & Auditability – end‑to‑end logs for dispute resolution and regulator confidence.
  • Noted that these guardrails do not impede adoption; they enable scalable trust.

Recommendation: Embed security‑by‑design, consumer‑intent validation, and full audit trails into any agentic payment system.
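The four guardrails above can be sketched as an ordered gate over an agent‑initiated payment. This is an illustrative toy only, not a Mastercard API: the `AgentPayment` and `approve` names, fields, and check order are all assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AgentPayment:
    """Hypothetical representation of a payment initiated by an AI agent."""
    agent_id: str              # identity of the AI agent acting on the user's behalf
    agent_verified: bool       # guardrail 1: "Know Your Agent" credential check passed
    tokenized: bool            # guardrail 2: security-by-design, no raw card number in flight
    user_mandate: Optional[str]  # guardrail 3: explicit consumer authorization for this purchase
    audit_log: list = field(default_factory=list)  # guardrail 4: end-to-end decision trail

def approve(p: AgentPayment) -> bool:
    """Apply guardrails 1-3 in order, logging each decision for auditability (guardrail 4)."""
    checks = [
        ("know_your_agent", p.agent_verified),
        ("security_by_design", p.tokenized),
        ("consumer_intent", p.user_mandate is not None),
    ]
    for name, ok in checks:
        p.audit_log.append((datetime.now(timezone.utc).isoformat(), name, ok))
        if not ok:
            return False  # fail closed on any missing guardrail
    return True
```

In this sketch the “sushi‑order” mishap corresponds to a payment arriving with `user_mandate=None`: the first two checks pass, the consumer‑intent check fails, and the refusal is itself logged for later dispute resolution.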

2.3. NetApp – Data‑Center Agentic AI for Data Quality & Risk Management

Speaker: Syam Nair

  • NetApp provides multi‑cloud storage; agents are positioned close to the storage controller to prepare data at source for AI consumption (reducing data movement).
  • Highlighted the criticality of data quality for AI; agents can extract structured value from unstructured data on‑premises.
  • Discussed a risk‑management use case: agents monitor cyber‑threat exposure (average breach time ≈ 59 seconds) at the data layer, enabling rapid response.
  • Described the agent maturity ladder (Level 1 – assistive copilot; Level 5 – fully autonomous). NetApp is currently around Level 3.

Key Insight: Placing agentic AI at the storage layer tightens data‑quality pipelines and provides near‑real‑time security monitoring.
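The five‑level maturity ladder lends itself to a simple ordered enumeration. The talk only named the endpoints (assistive copilot, fully autonomous); the labels for levels 2–4 below are hypothetical fill‑ins for illustration.

```python
from enum import IntEnum

class AgentMaturity(IntEnum):
    """Toy encoding of the agent maturity ladder; intermediate labels are assumed."""
    ASSISTIVE_COPILOT = 1     # suggests actions, human executes (named in the talk)
    SUPERVISED_ACTION = 2     # acts with per-action human approval (assumed label)
    BOUNDED_AUTONOMY = 3      # acts within preset limits, human-on-the-loop (assumed label)
    CONDITIONAL_AUTONOMY = 4  # acts freely, escalates exceptions to humans (assumed label)
    FULLY_AUTONOMOUS = 5      # end-to-end action without human intervention (named in the talk)

def requires_human_approval(level: AgentMaturity) -> bool:
    """Levels 1-2 keep a human in the loop for every individual action."""
    return level <= AgentMaturity.SUPERVISED_ACTION
```

Using an `IntEnum` makes the ladder ordered, so a deployment policy can compare an agent’s level against a threshold (e.g., NetApp’s “around Level 3” sits just past per‑action approval).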

3. Enterprise Guard‑Rails – Panel Discussion

Facilitated by: Jason Oxman (ITI)

3.1. Overview of Guard‑Rail Themes

| Speaker | Guard‑Rail Emphasis |
|---|---|
| Austin Mehron (CAISI) | Bottom‑up, industry‑driven standards – encourage companies to submit RFI comments and participate in listening sessions; use standards to address PII handling in regulated sectors. |
| Prith Banerjee (Synopsys) | Safety‑critical autonomous systems – illustrate the catastrophic risk of a cyber‑compromised autonomous vehicle or aircraft; stress full verification & validation before hardware prototyping. |
| Caroline Louveaux (Mastercard) | Four‑point payment guard‑rail (Know Your Agent, security‑by‑design, consumer intent, auditability). |
| Syam Nair (NetApp) | Multi‑level data‑layer guard‑rails: public‑private partnership for standards; governance of data lineage; agents cannot take accountability – humans must. |
| Jennifer Mulvaney (Adobe) | Human‑first policy – “humans before models”; ensure AI deployments serve human welfare. |
| Ellie Sakhaei (Google) | Human‑in‑the‑loop continuum – shift from “human‑in‑the‑loop” to “human‑on‑the‑loop” as agent autonomy grows; analogy to FAA drone‑pilot models. |
| Carly Ramsey (Cloudflare) | Global accessibility & standards harmonisation – open models and open standards; ensure inter‑regional compatibility (e.g., NIST vs. Singapore frameworks). |
| Sam Kaplan (Palo Alto Networks) | Three‑dimensional risk picture – agents add kinetic consequences; security must be a foundational layer of trust. |
| Danielle Gilliam‑Moore (Salesforce) | Governance vs. regulation – emphasise standards, ISO processes, and sector‑specific regulatory frameworks (e.g., UK, Indonesia). |

Key Takeaways

  • Agentic AI is moving from assistive tools to autonomous “agents” that can act in real time across critical domains (chip design, payments, data‑center operations).
  • Security‑by‑design, traceability, and clear consumer intent are non‑negotiable guardrails for any enterprise deployment.
  • Industry‑driven, voluntary standards (NIST, CAISI, ISO) are the preferred mechanism for rapid, cross‑border adoption; they are more agile than top‑down regulation.
  • Human‑in‑the‑loop/on‑the‑loop frameworks must evolve alongside agent autonomy; policies should codify the appropriate level of oversight for each risk tier.
  • Global harmonisation is essential – OECD principles, safety‑institute consortia, and multilateral forums (e.g., Singapore Cyber Week, ITU/UN) provide the scaffolding for consistent, inclusive governance.
  • Data governance is a cornerstone: agents draw decisions from data; robust lineage, quality, and privacy controls are required to prevent manipulation.
  • Policy should focus on outcomes, not technology – regulate the risk‑profile of applications rather than the underlying AI models, enabling future‑proof regulation.
  • Collaboration between public and private sectors (listening sessions, RFI submissions, standards bodies) is the most effective path to unlock safe, scalable agentic AI.
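The takeaway that oversight should be codified per risk tier can be sketched as a simple lookup. The tier names, oversight modes, and example applications below are assumptions for illustration, not drawn from any regulation discussed at the roundtable.

```python
# Toy mapping from an application's risk tier to the minimum oversight mode.
# All tier labels and mode names are hypothetical.
OVERSIGHT_BY_TIER = {
    "minimal":  "autonomous",          # e.g., log summarization
    "limited":  "human-on-the-loop",   # e.g., customer-service agent
    "high":     "human-in-the-loop",   # e.g., payment execution
    "critical": "human-in-command",    # e.g., safety-critical vehicle control
}

def minimum_oversight(risk_tier: str) -> str:
    """Return the minimum oversight mode; fail closed to the strictest mode for unknown tiers."""
    return OVERSIGHT_BY_TIER.get(risk_tier, "human-in-command")
```

The fail‑closed default reflects the panel’s outcome‑based framing: an application whose risk profile has not been classified gets the strictest oversight until it has.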

Prepared from the verbatim transcript of the “Agentic AI Roundtable” held at the AI Impact Summit, Delhi (2026).

See Also: