Open-source Tooling for Safe, Secure and Trustworthy AI
Abstract
The panel explored how open‑source tools can underpin safe, secure and trustworthy AI, with special focus on the global‑south talent pool and under‑represented language communities. Panelists examined current gaps in benchmarking, state capacity, and hardware accessibility, and they outlined a vision for the next five years—what would signal success or failure of an open‑source AI ecosystem. The discussion closed with concrete recommendations for governments, industry, and the research community, and a call for contributions to the OECD.AI Catalogue of Tools.
Detailed Summary
1. Opening
- The session opened with thanks to the organisers and an acknowledgement that existing AI benchmarks are unreliable for low‑resource languages and regions.
- A global‑south perspective was highlighted: current efforts (e.g., MLCommons) are valuable but under‑funded; a global collaborative effort is required to broaden representation.
“It can’t be done by one organization or one country; it has to be a global effort.” – [Unidentified speaker]
- The moderator (Amanda Brock) introduced the central question for the panel:
“In five years, what would be a clear sign that an open‑source, trustworthy AI ecosystem has succeeded—or failed—especially for under‑represented regions and communities?”
2. Vision of Success / Failure
2.1 Core Themes Identified
| Theme | Success Indicator | Failure Indicator |
|---|---|---|
| Diversity of contributors | A distributed ecosystem where many regions design and shape AI, not just a handful of firms. | Concentration of AI development in a few hubs (e.g., only five major software centers). |
| Open‑source as a digital public good | Open tools funded by public & private sectors, with transparent governance. | Closed‑source dominance; tools remain inaccessible. |
| Trust embedded by design | Trust metrics become a design prerequisite rather than a post‑hoc test. | Trust is an after‑thought, leading to unsafe deployments. |
| Hardware democratization | Frontier‑model stacks run on affordable hardware, enabling broader experimentation. | High‑cost hardware remains a barrier; only well‑funded labs can advance. |
| State capacity & policy | Robust national AI safety institutes equipped with open tools; collaborative policy frameworks. | Weak state capacity, fragmented regulations, and lack of shared tooling. |
3. Individual Perspectives
3.1 Mark Surman (Mozilla) – Diversity & Economic Power Shift
- Envisions a future “very, very different” from today’s concentration of software design (e.g., smartphones built by a handful of companies).
- Argues that greater diversity in production, economics, and power structures will lead to a healthier AI ecosystem.
3.2 Amanda Brock (OpenUK) – Collaboration, Open‑source Reciprocity, and Outreach
- Stresses the need for cross‑border collaboration amid geopolitical tension.
- Emphasises the mantra “open → give back → open”: open‑source creations must be returned to the community.
- Highlights the “submarine under the digital economy” metaphor: open‑source contributors are often invisible to governments; engagement must be reframed.
3.3 Balaraman Ravindran (CeRAI & IIT‑Madras) – Trust as a Design Principle
- Advocates shifting trust from a post‑hoc verification to a primary design objective.
- Calls for a “trusted AI design chain” where every building block is evaluated for trustworthiness from the outset.
3.4 Oliver Jones (UK AI Security Institute) – State Capacity & Global‑South Network
- Announces a new network for the Global South, aimed at delivering safety tools and building state capacity.
- Positions the network as a way to reduce friction for adopting open tools in under‑served regions.
3.5 Audrey Herblin‑Stoop (Mistral AI) – Cultural & Linguistic Inclusivity
- Argues that trust depends on cultural tailoring and high‑quality datasets in many languages.
- Points to Mistral’s multilingual models (German, Italian, Spanish, etc.) as early steps, but warns that more open data sharing is crucial.
3.6 Karine Perset (OECD) – Institutional Support & Catalogue Call
- Reminds the audience of the OECD.AI Catalogue of Tools call for submissions, positioning it as a concrete way to aggregate open‑source safety resources.
4. Audience Q&A
- Question to Oliver Jones (UK AI Security Institute) – AI 2027 model & safety methodology
- Oliver replied that the Institute publishes a plurality of methodologies on its website, including a Frontier AI Trends Report that synthesizes open‑source observations.
- Open Floor – “What should the audience remember?”
| Speaker | Key Take‑away (≈ 1 minute) |
|---|---|
| Audrey | Build on open‑source: create, benchmark, publish. A vibrant ecosystem is the only path to trust. |
| Amanda | The Chinese open‑source trajectory shows that long‑term, coordinated investment drives ecosystem leadership. Funding, conferences, and shared infrastructure are essential. |
| Mark | Open‑source infrastructure (bug‑tracking, security processes) must be scaled up; trust becomes a shared stewardship responsibility. |
| Oliver | Mozilla’s work on infrastructure aligns with the need to embed trust into open‑source development and make it a community‑wide practice. |
| Balaraman | Immediate goal: get the Trusted AI Commons operational; long‑term goal: make trust the first design criterion. |
Additional Remarks (Moderator, Amanda)
- Cited her prior experience as private secretary to Rishi Sunak, underscoring how far the conversation has shifted from a “closed shop” to an open‑source advocacy arena.
- Re‑emphasised the need for government‑backed funding and political commitment to sustain open‑source tooling.
5. Announcements & Closing
- OECD.AI Catalogue of Tools – panelists invited researchers and developers to submit their open‑source safety, benchmarking, and testing tools.
- Trusted AI Commons – a collaborative platform (partnering with OECD and other bodies) aiming to create globally‑representative evaluation protocols.
- Global‑South Network – inaugurated by the UK AI Security Institute, targeting capacity‑building in regions with limited resources.
- The moderator thanked the panelists, distributed token gifts (bags), and closed the session.
Key Takeaways
- Open‑source must be funded and institutionalised: governments, NGOs, and industry need to co‑invest in the infrastructure that enables open collaboration.
- Trust needs to be built‑in, not bolted‑on: design workflows should integrate trust metrics from the start.
- Hardware democratization is essential: without affordable compute, open‑source tools cannot achieve true global impact.
- Cultural and linguistic diversity matters: open datasets covering many languages are a prerequisite for trustworthy AI in the global south.
- State capacity and shared safety tools: networks like the newly launched Global‑South network of the UK AI Security Institute are critical for equitable access.
- Success will be visible when the ecosystem is truly distributed – diverse contributors, transparent governance, and open tools used worldwide.
- Failure will be evident if concentration persists and under‑represented communities remain excluded from both development and safety oversight.
- Immediate actions: submit tools to the OECD.AI Catalogue, participate in the Trusted AI Commons, and advocate for policies that treat open‑source as a digital public good.
See Also:
- welfare-for-all-ensuring-equitable-ai-growth-across-the-worlds-largest-and-oldest-democracies
- advancing-ai-safety-across-languages-cultures-and-contexts
- responsible-ai-at-scale-governance-integrity-and-cyber-readiness-for-a-changing-world
- democratizing-ai-resources-and-building-inclusive-ai-solutions-for-india
- beyond-the-cloud-the-sovereign-ai-moment