Unpacking Openness and Trust in AI: Global Perspectives
Abstract
The panel examined the tension between openness and control in AI governance. Panelists compared how openness is framed in the United States, Europe, and emerging economies, explored the role of industrial and trade policy, highlighted the strategic importance of “middle‑power” coalitions, and discussed competition law as a tool for digital sovereignty. Through concrete case studies—from open‑source large‑language‑model collaborations to community‑driven speech‑recognition projects—the discussion traced the limits of “binary” openness, the dangers of open‑washing, and the need for democratic, participatory mechanisms that empower a broader set of stakeholders.
Detailed Summary
1. Opening Framing – Moderator
- The moderator thanked the audience and set the agenda: to explore how openness can foster collaboration, innovation, and trust while being bounded by commercial, state, and security constraints.
- Emphasis was placed on avoiding a simplistic “open vs closed” binary and instead looking at openness across the AI stack (compute, data, models, APIs, applications).
2. United States Perspective – Alondra Nelson
2.1 Openness as a Gradient, Not a Binary
- In the Biden administration, openness was treated as a spectrum; the current administration tends to present “open” as a finished state.
- The socio‑technical conception of openness (borrowed from the open‑source movement) includes democracy, accountability, shared infrastructure, and the ability for diverse communities to modify and reuse technology.
2.2 Limits of a Purely Technical View
- Focusing only on model‑weight releases (e.g., “Llama 2/3”) misses critical dimensions such as training data, API accessibility, and governance mechanisms.
- Geopolitical arguments (e.g., security concerns about China) are often invoked to sidestep the democratic, accountability‑oriented aspects of openness.
2.3 “Safe” Exceptions & the Need for Nuance
- Acknowledged that certain domains (e.g., nuclear‑related AI) may legitimately restrict openness.
- However, a gradient approach can still permit safer, community‑beneficial uses while preserving transparency.
3. U.S. Industrial & Trade Policy – Amba Kak
3.1 “Light‑Touch” vs. “Heavy‑Hand” Reality
- While the administration publicly claims a deregulatory stance, it actively wields tariffs, export controls, semiconductor restrictions, and H‑1B visa premium pricing to shape the AI ecosystem.
3.2 Democratic Deficit in Non‑Regulatory Levers
- Formal rulemaking provides public notice and comment, offering a democratic channel.
- Policy exercised via executive orders, trade measures, and immigration rules lacks such public input, making the process “anti‑democratic” relative to traditional regulation.
3.3 Implications
- The blend of heavy regulatory influence and limited public participation raises concerns about accountability and legitimacy of U.S. AI governance.
4. European / French View – Anne Bouverot
4.1 Geopolitics & the “Middle‑Power” Frame
- The AI Action Summit (Paris, Feb 2025) showcased a shift from a binary “democratic open world vs. closed world” framing to a coalition of “middle powers” (Canada, France, Germany, Switzerland, India, Japan, Australia, etc.).
4.2 Open Source as a Competitive Lever
- Europe supports open‑source AI to avoid dependence on US/China proprietary stacks and to leverage collective knowledge.
- Not every use‑case should be open (e.g., high‑risk or dual‑use applications), but openness can stimulate competition and lower entry barriers for smaller actors.
4.3 Governance Levers for Coalitions of the Willing
- Public funding, shared compute resources, and open datasets are tools to strengthen digital sovereignty and resilience.
5. Global‑South / Indian Perspective – Astha Kapoor
5.1 Distinct Needs of Emerging Economies
- Structural challenges (health, education) dominate the AI agenda, not merely “technology for technology’s sake.”
5.2 Open Data & Multilingual Datasets as Enablers
- Open multilingual corpora are essential for language preservation and inclusive AI services.
5.3 Risks of “Openness as Adoption” Narrative
- Framing openness solely as a driver of adoption can mask dependency: global‑south users may become cheap labor for data labeling or test‑beds for models built elsewhere.
5.4 Call for Solidarity, Not Homogeneity
- Emphasized that one‑size‑fits‑all policies ignore the differing aspirations of middle‑power and global‑south nations.
6. Competition Law & Sovereignty – Ravneet Kaur
6.1 Market Study Findings (AI & Competition, released 25 Oct)
- Identified anti‑competitive practices: self‑preferencing, tying, bundling, exclusive agreements, parity clauses, ecosystem lock‑in, targeted price discrimination, opaque systems.
6.2 Access as a Central Concern
- Access to data, compute, and talent determines who can compete in AI.
6.3 Competition as a Sovereignty Tool
- Competition ensures contestable markets, prevents dominance from foreclosing new entrants, and safeguards consumer welfare.
6.4 Transparency & Accountability Requirements
- Calls for technical transparency (understanding model behavior) and governance transparency (how systems are overseen).
7. Empire of AI – Karen Hao
7.1 Knowledge Monopoly as an Empirical Reality
- Large AI firms act as empires by monopolizing data, model development, and expertise, thereby controlling the narrative for policymakers and the public.
7.2 Illustrative Projects
- BLOOM LLM: 1,000+ researchers from 70+ countries collaborated on an open‑source LLM, integrating cultural institutions to enable transparent data governance and value sharing.
- Te Hiku Media (New Zealand): A nonprofit built a Māori‑language speech‑recognition system using Mozilla DeepSpeech; the project was community‑driven, with consent‑based data collection, education, and iterative co‑design.
7.3 Scale vs. Monopoly
- “Scale” in Silicon Valley often means distribution through a single, closed platform—a monopoly, not true diffusion.
- True scale should involve multiple, community‑driven models serving diverse industries and locales, especially those that are data‑poor.
8. Closing Reflections
| Speaker | Key Reflection |
|---|---|
| Alondra Nelson | Noted the lack of community transparency around data‑center siting (NDAs, secret contracts). Highlighted the summit’s unprecedented community participation (students, “aunts,” NGOs). |
| Astha Kapoor | Emphasized the need for friction and dialogue (e.g., cooperatives with “one‑member‑one‑vote” governance) to turn communities into co‑designers rather than mere users. |
| Ravneet Kaur | Stressed the importance of digital public infrastructure (compute platforms, open data) and small language models for local relevance. |
| Amba Kak | Called for more enforcers (competition, consumer‑protection authorities) at future summits to ensure accountability. |
| Karen Hao | Warned against “corporate inclusion‑speak” that cloaks profit‑driven platform lock‑in. Urged participants to ask whether AI truly solves the real problems they face, and to design from the ground up with genuine empowerment. |
9. Audience Q&A – Highlights
| Questioner | Theme | Summary of Responses |
|---|---|---|
| Micro‑level AI use (audience member) | Individual agency vs. systemic exploitation | Panelists agreed the binary “use or not use” framing is false. Suggested intentional tool selection, open‑source labels, and collective action (protests, litigation) to increase agency. |
| Gender & “All‑Inclusive” vision (audience) | Representation, inclusion of women, Chinese participation | Amba Kak and Astha Kapoor noted that “inclusion” currently translates to market access; broader demographic inclusion (gender, geography) remains limited. Anne Bouverot acknowledged low Chinese representation and suggested more diverse stakeholder invitations. |
| Open‑source Chinese models (audience) | Ideological control, practical use | Karen Hao mentioned that open‑source Chinese models can be fine‑tuned to remove overt CCP bias, but their enterprise uptake shows they’re already influencing the market. |
| Intellectual‑property & openness (lawyer) | IP constraints vs. openness | Ravneet Kaur clarified that the Competition Commission intervenes only when IP is used to abuse market power; otherwise, IP protection is not a competition issue. |
| Labor exploitation in data pipelines (audience) | Compensation for data workers, “protect‑by‑design” | Karen Hao affirmed that labor exploitation is inherent in current AI supply chains and called for a radical redesign of data‑collection practices. |
| Open‑washing & competition assessment (audience) | Assessing genuine openness, need for new tools | Ravneet Kaur explained that competition authorities conduct case‑by‑case rigorous analysis, looking at both public data and internal investigations to detect hidden dependencies. |
Key Takeaways
- Openness is a spectrum that must include socio‑technical dimensions (democracy, accountability, shared infrastructure), not merely the release of model weights or APIs.
- U.S. AI policy relies heavily on industrial, trade, and immigration levers, which lack the public‑comment mechanisms of formal rulemaking, creating a democratic deficit.
- European middle‑power coalitions view open‑source AI as a strategic tool to achieve competitiveness and digital sovereignty while acknowledging that not every use‑case should be open.
- Emerging economies need open multilingual data and community‑driven AI to address structural challenges, but must guard against becoming cheap data labor for external actors.
- Competition law is a critical, under‑utilized lever for digital sovereignty: it can prevent ecosystem lock‑in, ensure transparent access to data/compute, and protect consumer welfare.
- Corporate “inclusion‑talk” often masks platform lock‑in; genuine empowerment requires participatory design, transparent governance, and community ownership of the AI stack.
- Case studies (BLOOM LLM, Te Hiku Media) demonstrate that open, consent‑based, community‑centric projects can produce socially valuable AI while preserving data sovereignty.
- Scale should not be equated with monopoly; true scalability means multiple, locally‑tailored models serving diverse sectors, especially those that are data‑poor.
- Future governance frameworks need clear labeling of openness, robust competition analysis, and mechanisms for civil‑society oversight to prevent open‑washing and hidden dependencies.
- Individual users are not powerless: through conscious tool selection, collective advocacy, and demand for third‑party transparency labels, they can influence market dynamics and promote ethical AI.