Democratising Access to AI through Data Infrastructure | Models, Governance and Market Design
Detailed Summary
1. Moderator's Framing & Roadmap
- Context & Rationale – The moderator reminded the audience that the OECD's 2019 AI Principles are valuable, but converting them into operational, comparable tools for policymakers, companies, and the public remains a major challenge.
- Hiroshima AI Process – Launched under Japan’s G7 presidency (2023) to improve AI‑system transparency. Central element: an International Code of Conduct that sets expectations for governments and AI developers across the AI lifecycle.
- Reporting Framework – A voluntary, comparable mechanism allowing organisations to demonstrate how they manage AI risks and uphold accountability. To date, 25 organisations from nine countries have submitted reports, all publicly available at oecd.ai/hiroshima.
- Roadmap to Version 2.0 – The moderator outlined three main upgrades slated for the next release (pilot in March, full release expected in Q2 2024):
- More comparable, aggregable data – enabling cross‑organisation analysis.
- Tight integration with the OECD AI‑tools catalogue (≈ 700 tools/metrics). Reporters can suggest new tools directly from the interface, keeping the catalogue current.
- Lifecycle‑wide tailoring – extending from developer‑focused reporting (first round) to include deployers (e.g., Infosys) and other actors across the AI pipeline.
The moderator then introduced the panelists and noted Infosys’s pioneering role as the first Indian submitter, though its representative (Ashish Tiwari) could not attend.
2. Panelist Perspectives
2.1 Yoichi Ida (Japan) – Global Interest & Early Lessons
- Historical Overview – Japan began its AI‑governance dialogue with the OECD in 2016, contributing to the 2019 OECD AI Principles and the G20 AI Principles (with India as a key partner).
- Motivation for Hiroshima Process – By 2020, divergent regulatory approaches (US vs. EU) prompted Japan to champion a value‑based, common‑standard approach that could accommodate jurisdiction‑specific differences while preserving a shared baseline.
- Encouraging Signals – Growing participation from >30 private firms and >50 government organisations worldwide; the “Friends Group” keeps the conversation alive.
- Key Challenge Highlighted – Translating voluntary reporting into capacity‑building for governments and civil society; the need for multi‑stakeholder literacy in AI governance.
2.2 Amanda Craig (Microsoft) – Internal Reflections & Lifecycle Opportunities
- Value of the Framework for Microsoft – Served as a catalyst for cross‑team alignment on risk‑management language and practice.
- Gap Between Theory & Practice – Paper‑based risk‑management processes often appear linear, whereas real‑world implementation is iterative and context‑dependent. The reporting exercise surfaced these discrepancies, prompting deeper internal dialogue.
- Strategic Outlook for Version 2.0 –
- Holistic Value‑Chain View – Move beyond developer‑only checks to cover model‑developers, application‑developers, and deployers in a single, coherent narrative.
- Dynamic Component Integration – Recognise that AI components (models, datasets, APIs) evolve rapidly; governance must be equally agile.
- Agentic AI – Anticipates that trust mechanisms will grow sharply in importance as autonomous agents become pervasive.
2.3 Paula Goldman (Salesforce) – Market‑Driven Adaptability & Multilingualism
- Why Frameworks Matter for Business – Provide a shared language that helps customers (including SMEs) assess trustworthiness without each vendor inventing its own grading system.
- Dynamic Market & Regulatory Landscape – Rapid tech evolution demands a flexible, updatable framework; static regulations may lag behind.
- Agentic AI Implications – Salesforce builds both its own models and an open‑platform that integrates third‑party AI. Clarifying responsibility boundaries (model provider vs. platform vs. customer) becomes essential when autonomous agents interact.
2.4 Joelle Pineau (Cohere) – Voluntary Commitments as an International Bridge
- Geopolitical Fragmentation – Diverse regulatory regimes (EU, US, Asia) make a single mandatory standard unlikely in the near term.
- Voluntary Reporting Benefits
- Transparency & Trust – Uniform disclosures help customers understand data usage, model properties, and risk posture.
- Innovation Enablement – Reduces compliance overhead for global firms, allowing faster product roll‑outs.
- Policy Feedback Loop – Aggregated voluntary data can inform regulators, accelerating more evidence‑based rule‑making.
- Leadership Acknowledgement – Praised Japan’s role and the OECD’s stewardship in convening the ecosystem.
3. Lightning‑Round – One‑Minute Closing Recommendations
| Speaker | Concrete Improvement Proposed |
|---|---|
| Yoichi Ida | Introduce incentive mechanisms (e.g., market‑recognised trust seals, preferential procurement) to reward organisations that submit reports, thereby sustaining voluntary engagement. |
| Paula Goldman | Clarify actor‑specific responsibilities (developers, platforms, users) quickly and visibly, mirroring the shared‑responsibility model used in cybersecurity. |
| Joelle Pineau | Strengthen multicultural & accessibility dimensions (multilingual reporting, culturally‑aware risk metrics) to ensure global relevance. |
| Amanda Craig | Evolve the framework into a living, iteratively‑updated system with structured data fields that enable comparability and best‑practice extraction, especially as agentic AI emerges. |
4. Audience Q&A – OECD Role for Emerging Markets
- Question (from Dr. Patayek, Mumbai) – "How will the OECD help large emerging markets such as India, Brazil, and Indonesia adopt AI technologies?"
- Answer (moderator) – The OECD serves as secretariat for the Global Partnership on AI (GPAI), a coalition of 46 nations (and growing), including India, Brazil, and various African states. GPAI provides a forum for sharing best practices, capacity building, and joint projects. The Hiroshima Process, backed by Japan and the OECD, already involves roughly 60 countries, offering a voluntary reporting channel that emerging markets can use to benchmark and improve AI governance without waiting for formal regulation.
- The moderator also underscored that national initiatives (e.g., Japan's Friends Group, Infosys's participation) complement the OECD's work, driving momentum at the country level.
5. Closing Remarks & Adjournment
- The moderator thanked the panelists, noted that a photo would be taken, and invited any remaining audience questions. The session concluded shortly thereafter.
Key Takeaways
- The Hiroshima AI Reporting Framework is the first international, voluntary, comparable mechanism for organisations to disclose AI risk‑management practices across the entire AI lifecycle.
- Version 2.0 (target Q2 2024) will deliver:
- More standardised, aggregable data for cross‑organisation analysis.
- Embedded tool‑catalog integration (≈ 700 OECD‑listed tools) that stays current via user submissions.
- Expanded coverage to deployers and other actors beyond developers.
- Panel consensus: A shared, adaptable framework is essential for building trust, especially as agentic AI (autonomous agents) proliferates.
- Practical benefits reported:
- Microsoft – exposed gaps between policy paperwork and operational reality, prompting internal alignment.
- Salesforce – gained a common language to convey trust to SMEs and large enterprises alike.
- Cohere – sees voluntary commitments as a pragmatic bridge across fragmented regulatory regimes, fostering both transparency and innovation.
- Challenges identified:
- Capacity‑building for governments and civil society to interpret and use reports.
- Need for incentive structures (trust seals, procurement preferences) to keep organisations engaged.
- Ensuring multilingual, multicultural accessibility so the framework serves non‑Western markets.
- Policy relevance: Voluntary reporting feeds evidence‑based policy making, offering regulators a real‑time view of industry practices before formal rules arrive.
- OECD’s role: Through the GPAI and the Hiroshima Process, the OECD facilitates a global partnership that helps emerging economies (India, Brazil, Indonesia, etc.) participate in and benefit from shared AI governance standards.
Prepared by an AI‑conference summarisation specialist, based on the verbatim transcript and the supplied agenda information.