Building Trust: Digital Infrastructure Fit for the AI Era

Abstract

The panel examined how trust can be embedded in the digital infrastructure that underpins today’s AI ecosystem. Participants highlighted the need for a data‑empowerment and protection architecture (DEPA), sovereign‑first AI agents, interoperable data‑portability standards, and agile public institutions capable of keeping pace with rapid technological change. Cross‑sector collaboration, bringing together governments, private innovators, civil society, and consumer advocates, was positioned as essential for creating resilient, values‑driven AI governance at both national and international levels.

Detailed Summary

1. Opening & Framing

  • The moderator set the stage by noting the transition from early government‑led internet building to a private‑sector‑driven AI era.
  • Key Point: Aligning rapid AI innovation with societal goals requires credible, trustworthy institutions and human‑centric, bottom‑up approaches.

2. Trust Foundations in Digital Public Infrastructure (DPI)

2.1 Credibility, Leadership & Institutional Vision

  • Arvind Gupta described the “core success formula” behind large‑scale DPI projects in India:
    • Strong leadership vision.
    • Institution building around credibility and trust.
    • Pilot‑to‑scale methodology.

2.2 Data Empowerment & Protection Architecture (DEPA)

  • Arvind introduced DEPA, a layer of the India Stack that will sit beneath the emerging AI stack.
  • DEPA’s purpose: enable personal‑data empowerment—share data with consent, for a limited purpose, while protecting it from misuse.
  • Requirements: legislative backing, regulation, and adherence to democratic values (openness, accountability, transparency, opportunity).
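The consent‑based, purpose‑limited sharing that DEPA aims to enable can be sketched in miniature. This is an illustrative sketch only, assuming a simplified consent record; the field names and checks below are hypothetical and are not taken from the actual DEPA specification:

```python
# Illustrative sketch of purpose-limited consent; field names are
# hypothetical and do NOT reflect the real DEPA specification.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ConsentArtifact:
    """A record of what a user agreed to share, for what purpose, until when."""
    data_types: frozenset   # e.g. {"bank_statements"}
    purpose: str            # the single declared purpose of sharing
    expires_at: datetime    # consent is time-bounded

    def permits(self, requested_type: str, declared_purpose: str) -> bool:
        """Allow a request only if it matches the consented data scope and
        purpose, and the consent has not expired (purpose limitation)."""
        return (
            requested_type in self.data_types
            and declared_purpose == self.purpose
            and datetime.now(timezone.utc) < self.expires_at
        )
```

A request for the same data under a different purpose (say, ad targeting instead of loan underwriting) would be rejected, which is the behavioural core of “share with consent, for a limited purpose”.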

2.3 Risks of Centralised, Surveillance‑Based Platforms

  • The panel warned that current AI platforms risk repeating the centralisation mistakes of the previous internet generation, potentially “speed‑running” through the same errors.

3. Building an Alternative Tech‑Value Stack

3.1 Decentralisation of Trust & Sovereign Agents

  • Tomicah Tillemann (Project Liberty) highlighted the emergence of AI agents that operate their own social networks (“Moldbook”, “Cloudbots”).

  • Problem identified: most agents now serve the interests of large AI firms (e.g., OpenAI, Tesla) rather than individual users.

  • Proposed Solution: Develop sovereign agents that are accountable to individuals, manage personal data per user wishes, and transparently disclose their affiliations.

3.2 Data Portability & Interoperability

  • Tomicah reiterated that interoperability—the ability to move data across models and services—is essential to avoid the “data‑as‑product” trap.
  • Sarah Nicole stressed the need for dual code:

    • Technical code (APIs, standards).
    • Legal code (legislation granting data‑control rights).
  • Procurement Standards: Supheakmungkol Sarin (“Mungul”) referenced the need for government procurement frameworks that embed trustworthy‑AI standards.

  • Economic Models: The panel emphasised that both public procurement and consumer spending choices must align with trustworthy‑AI incentives.
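The “technical code” side of interoperability can be illustrated with a toy example: two services agree on a neutral export schema so user data can move between them without becoming a proprietary product. The schema and function names below are hypothetical, not an existing standard:

```python
# Hypothetical portability sketch: a shared, neutral export schema.
# The field set and function names are illustrative assumptions only.
import json

PORTABLE_SCHEMA = {"user_id", "posts", "contacts"}  # agreed field set

def export_portable(profile: dict) -> str:
    """Service A serialises only the agreed fields to a neutral format,
    so service-internal fields never leak into the export."""
    return json.dumps({k: profile[k] for k in PORTABLE_SCHEMA & profile.keys()})

def import_portable(blob: str) -> dict:
    """Service B accepts any export that conforms to the shared schema,
    rejecting fields outside the agreed set."""
    data = json.loads(blob)
    unknown = data.keys() - PORTABLE_SCHEMA
    if unknown:
        raise ValueError(f"unrecognised fields: {unknown}")
    return data
```

The “legal code” half of the dual approach is what makes such an export mandatory rather than optional: legislation grants the user the right to demand it.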

4. Institutional Agility & Global Governance

4.1 Agile Institutions

  • Robert Opp (UNDP) argued that institutional agility—the capacity of civil servants and regulators to adapt quickly—is foundational for trustworthy AI.

  • Key Elements:

    • Continuous capacity‑building.
    • Audience‑centric policymaking.
    • Embedding democratic values throughout the governance process.

4.2 International Processes & Speed Gap

  • Robert outlined existing UN efforts:

    • Annual AI policy dialogues.
    • A newly formed International Scientific Panel on AI.
  • He noted the speed mismatch between multilateral decision‑making and rapid AI advances.

5. The Role of Civil Society & Consumer Advocacy

5.1 Consumer Advocacy as Check‑and‑Balance

  • Vidisha Mishra (Global Solutions Initiative) reflected on her experience with the EU AI Act, emphasizing that:

    • Agility is needed in global frameworks.
    • Consumer advocates provide essential feedback loops to guard against profit‑driven incentives.
  • Sarah Nicole added that consumer organisations must be part of a movement (Project Liberty’s alliance of ~200 NGOs) to push for interoperability, data‑portability, and rights‑based standards.

5.2 Funding Gaps & Capacity for Civil Society

  • Sarah highlighted that less than 1 % of AI‑safety funding goes to civil‑society groups, creating a capacity gap for advocacy, redress mechanisms, and public education.

6. Addressing the Upskilling & Access Challenge

  • Robert (answering a question) stressed that access to compute, reliable data, and affordable training are all components of a robust ecosystem.
  • He reported that private‑sector partnerships and government‑led programmes are increasingly targeting capacity‑building in low‑ and middle‑income countries.

7. Sovereign‑First Models & Interoperability Bottlenecks

  • Arvind responded to a question about sovereign‑first AI models (SLMs):
    • Regulation will enforce data localisation and consent‑based sharing.
    • DEPA can serve as the intermediary layer to allow cross‑model data exchange while respecting sovereignty.

8. Emerging Norms & Legislative Examples

  • Sarah cited a new Virginia state law mandating data interoperability and portability for AI communications, noting similar moves across multiple jurisdictions.

9. Audience Q&A (Condensed)

| Question Theme | Summarised Query | Representative Answers |
| --- | --- | --- |
| AI as complement vs. displacement | How can governments ensure broad access to AI while up‑skilling remains limited? | Robert: an ecosystem approach (affordable compute, data, and training); public‑private capacity building is essential. |
| Legal / normative infrastructure | Call for baseline AI governance norms, possibly a white paper. | Sarah: legislative work is ongoing (e.g., EU AI Act, US state laws), but speed remains a hurdle; movement‑based advocacy is needed. |
| Consumer advocacy role | How can consumer groups bridge global‑local gaps? | Vidisha: advocate for a minimum, non‑negotiable global framework while ensuring local contexts are respected; agile mechanisms are needed. |
| Sovereign‑first & interoperability | Will sovereign‑first models create new silos? | Arvind: DEPA’s consent‑based, purpose‑limited sharing can keep data interoperable across sovereign models. |
| Funding for civil society | Funding for safety research is heavily skewed towards private AI development. | Sarah: highlighted the <1 % figure; urged dedicated funding streams for civil‑society capacity. |

10. Closing Remarks & Announcements

  • Moderator thanked the panel and audience, highlighted the need for more AI Impact Summits, and invited participants to join Project Liberty’s alliance.
  • Announcement: The panel’s discussion will be distilled into policy briefings for international platforms (UN, G7, G20).

Key Takeaways

  • Trust must be built into the core of AI infrastructure through credible institutions, legislation, and human‑centred design.
  • DEPA (Data Empowerment & Protection Architecture) is positioned as the foundational layer for a trustworthy AI stack, enabling consent‑based, purpose‑limited data sharing.
  • Sovereign agents—AI agents accountable to individuals rather than corporations—are essential to avoid a repeat of the “data‑as‑product” model.
  • Interoperability and data portability are non‑negotiable; without them, users remain locked into proprietary platforms.
  • Agile public institutions (with continuous capacity‑building) are needed to keep pace with rapid AI advances.
  • International governance is moving (UN AI policy dialogues, scientific panel, state‑level laws) but suffers from a speed mismatch; coordinated multilateral action remains critical.
  • Consumer advocacy and civil‑society involvement provide vital checks, demand transparent redress mechanisms, and must be resourced (currently <1 % of AI‑safety funding).
  • Upskilling, compute access, and affordable data are core ecosystem ingredients; public‑private partnerships are the primary delivery vehicles.
  • Sovereign‑first AI models can coexist with interoperable data layers (e.g., DEPA) to prevent siloed ecosystems.
  • Legislative momentum is evident (e.g., Virginia AI data‑portability law); scaling such norms globally requires a coordinated advocacy movement.
