Governing Safe and Responsible AI within Digital Public Infrastructure
Abstract
The session examined how artificial intelligence can be integrated into digital public infrastructure (DPI) in a manner that safeguards human rights, transparency, accountability and inclusion. High‑level leaders from Estonia, Switzerland, Lithuania and the Netherlands outlined national experiences and the work of the Freedom Online Coalition in shaping global AI‑governance principles. The subsequent panel explored practical mechanisms for AI oversight, lessons from the Global South, industry perspectives on balancing rapid innovation with safety, and the role of multilateral coordination. An audience Q&A probed challenges around explainability and auditability of AI systems.
Detailed Summary
1. Welcome and Session Framing
- Moderator Zach Lampell welcomed the audience and outlined the 55‑minute agenda: opening remarks, panel discussion, and audience Q&A.
- Described the Freedom Online Coalition (FOC) – a 41‑government network working to ensure that the rights people hold offline are equally protected online, and coordinating AI‑and‑human‑rights policy.
- Emphasized the session’s focus on algorithmic transparency, human‑centred governance, regulatory best practices, and inclusion, with special attention to experiences from the Global South.
2. High‑Level Opening Remarks
2.1 President Alar Karis – Estonia
- Highlighted Estonia’s digital‑first strategy: “technology must serve people, not the other way around.”
- Stressed that AI must complement human judgment; public authorities need to explain decisions, allow challenges, and embed safeguards against bias.
- Cited AI‑literacy in schools, transparent AI tools in education, and the importance of human‑centred design.
- Reviewed Estonia’s role as chair of the FOC (2025) and the coalition’s principles for rights‑respecting DPI: transparency, lifecycle accountability, inclusive design, privacy/freedom‑of‑expression protection, and civil‑society participation.
- Argued that trust is built through clarity, responsiveness, and the ability to correct course, especially when DPI expands rapidly.
2.2 Bernard Maissen – Switzerland (Chair FOC 2026)
- Positioned DPI (digital identities, public‑service platforms, AI‑enabled decision tools) as core to modern state‑citizen interaction.
- Reiterated three core imperatives for AI‑enabled DPI:
  1. Human‑rights‑anchored legal basis – mandatory impact assessments, robust data governance, bias safeguards.
  2. Accountability and oversight – transparent standards, auditability, independent oversight bodies, accessible redress mechanisms.
  3. International cooperation and capacity building – interoperable standards, avoiding fragmented safeguards, especially for low‑resource states.
- Cited the India AI Summit as a concrete forum for inclusive, rights‑respecting AI dialogue.
2.3 Taurimas Valys – Lithuania
- Described Lithuania’s interoperable state registers (population, property, business, address), e‑health, e‑tax, e‑signature, and e‑residency – used by >90 % of adults.
- Emphasized cyber‑security (national centre, mandatory cyber hygiene, hybrid‑state cloud) as the backbone for trustworthy DPI.
- Noted the Vilnius Convention – the world’s first legally binding treaty on AI, setting global human‑rights standards.
- Announced the creation of the Lit‑AI Center, a national hub linking data, compute, and interdisciplinary expertise for secure, human‑centred AI across public administration, health, security and green transformation.
2.4 Harry Verweij – The Netherlands
- Framed DPI as “social‑contract technology” – infrastructure that must be transparent, contestable and governed in the public interest.
- Stressed the necessity of multi‑stakeholder governance (government, civil society, academia, private sector, affected communities) throughout design, deployment and monitoring.
- Highlighted the Dutch national algorithm register (≈ 1,350 AI systems from 320 public authorities) as a transparency tool.
- Discussed the Netherlands’ advocacy for inter‑governmental AI norms (Council of Europe AI Convention, middle‑power coalition at Davos).
- Asserted that the Global South is a source of innovation and governance models, not a testing ground.
3. Panel Discussion
| Panelist | Core Points |
|---|---|
| Pratik Waigar (Tech Global Institute) | – Effective AI oversight starts before deployment: problem definition, local context, regulatory frameworks, data‑protection regimes, and mandatory impact assessments. – Grievance‑redress mechanisms are essential; cited a case in which a welfare recipient was wrongly declared dead and forced to stage a “wedding procession” to prove she was alive. – Emphasized procurement best practices (due process, participatory design, civil‑society involvement, red‑team testing of edge cases). |
| Juan Carlos Lara (Derechos Digitales) | – Latin‑American experience shows transparency frameworks that focus only on algorithms miss the underlying data infrastructure. – Auditability and early civil‑society/academic engagement are essential; exclusion often occurs politically before algorithms are even built. – The Inter‑American human‑rights system provides mandatory obligations, but national implementation (access‑to‑information, equality of service, privacy) remains uneven – a key challenge for operationalising FOC principles. |
| Alexandria Walden (Google) | – Google’s responsible‑AI stance: Bold + Responsible + Together – rapid innovation paired with robust governance, continual monitoring, and multi‑stakeholder engagement. – Transparency via annual responsible‑AI reports, product‑level cues (e.g., Gemini’s error‑disclaimer, source‑linking). – Calls for AI literacy for the public and for governments to embed accountability into procurement and deployment. |
| Norman Schulz (Germany) | – Highlighted FOC’s 2025 DPI Guidelines and the earlier Joint Statement on Responsible Government AI (US‑led). – Stressed multilateral forums as “safe spaces” for like‑minded governments to share impact‑assessment practices and good/bad case studies (e.g., Dutch subsidy‑recovery algorithm that caused social upheaval). – Argued that coordinated exchanges help embed human‑rights values across national AI deployments. |
3.1 Audience Question (Sanhit – technology‑governance researcher)
- Asked why explainability and auditability remain technically difficult.
- Alexandria Walden replied that while the industry acknowledges the need, the heterogeneity of user expertise and the distance between developers and citizens make it harder to provide understandable explanations.
- Norman Schulz added that the FOC DPI Principles (Dec 2025) and the Joint Statement (2025) bind members to create legal frameworks for auditing and remedy, but implementation gaps persist.
4. Key Announcements & Calls to Action
- Freedom Online Coalition – reaffirmed commitment to the rights‑respecting DPI principles (transparency, accountability, inclusive design, privacy, civil‑society participation).
- Vilnius Convention – promoted as a model treaty for globally binding AI standards.
- Dutch Algorithm Register – noted as a practical transparency mechanism for public‑sector AI.
- Lit‑AI Center (Lithuania) – announced as a hub for responsible AI research and deployment.
- Google’s AI‑Literacy Initiative – call for broader public education and collaborative policy‑development.
5. Closing Remarks
- Moderator Zach Lampell thanked the speakers and audience, emphasized that technology must serve humanity, and invited further dialogue after the session.
Key Takeaways
- Human‑centred design is non‑negotiable – AI in DPI must augment, not replace, human decision‑making (Estonia).
- Three‑pillar framework for responsible AI (legal grounding, accountability mechanisms, international cooperation) is echoed across Switzerland, Lithuania and the Netherlands.
- Transparency must extend beyond algorithms to the data pipelines and governance processes that feed them (Latin America).
- Robust grievance and redress mechanisms are essential; real‑world failures (e.g., welfare recipients wrongly declared deceased) illustrate the stakes.
- Multi‑stakeholder participation – civil society, academia, industry and affected communities should be involved from problem definition through deployment.
- International standards and treaties (Vilnius Convention, Council of Europe AI Convention, FOC DPI Principles) provide common reference points for global governance.
- AI literacy for the public and policymakers is critical to ensure understandable, contestable systems and to foster trust.
- National transparency tools (e.g., the Dutch algorithm register) can be scaled and shared as best‑practice models.
- Procurement and impact‑assessment must be embedded early; ad‑hoc or volunteer‑only development risks fragility and bias.
- Multilateral platforms (FOC, AI Summits) enable the exchange of good and bad case studies, accelerating the diffusion of rights‑respecting AI practices worldwide.
See Also:
- the-ai-cyber-nexus-a-strategic-dialogue-on-global-security-trust-and-governance
- toward-collective-action-a-roundtable-on-safe-and-trusted-ai
- panel-on-the-2026-international-ai-safety-report
- navigating-ai-dpi-geopolitics-an-autonomous-and-strategic-approach-for-the-global-majority
- safe-ai-building-shared-trust-and-accountability-infrastructure
- data-people-and-pre-empting-mass-exclusion-building-ethical-ai-as-digital-public-infrastructure
- trust-as-a-global-imperative-how-to-operationalise-safe-ai-for-al
- preparing-to-monitor-the-impacts-of-agents-closing-the-global-assurance-divide-for-safe-and-trusted-ai
- mahaai-building-safe-secure-and-smart-governance
- aligning-ai-governance-across-the-technology-stack