AI for Inclusive Economic Progress: The Public Services AI stack

Abstract

The panel explored how a shared AI “stack”—the combination of standards, open‑source code, data infrastructures and governance mechanisms—can be leveraged to deliver public services that are transparent, accountable, and culturally appropriate, thereby driving inclusive economic progress. Panelists highlighted the risk that the rapid diffusion of large foundation models could widen existing inequities, underscored the need for sovereign, locally‑tuned AI, and argued that open standards and public‑good licensing are essential to lower entry barriers, avoid vendor lock‑in, and foster sustainable innovation across the Global South. Real‑world examples from India’s UID/UPI ecosystem, Singapore’s public‑service AI pilots, and emerging education‑focused AI stacks illustrated both the opportunities and the policy/technical challenges that must be addressed.

Detailed Summary

1. Opening Remarks

  • The moderator opened with thanks and set the stage: the session would examine sustainable AI infrastructure capable of delivering guaranteed, accountable, transparent public services built on digital public goods.
  • Emphasis was placed on sovereign AI—the idea that governments must retain control over AI systems while also ensuring individual citizen benefits.

2. Speaker Introductions

  • Alpesh Shah (Managing Director, IEEE Standards) – will speak on standards & trustworthiness of the AI stack.
  • Gabriela Ramos (Head of UNESCO Task Force on Inequalities) – will address what must not be forgotten when building AI‑enabled infrastructure.
  • Joanna Shields (CEO, Precognition & Executive Chair, Responsive Wire Foundation) – will discuss government‑private sector decisions that create irreversible dependencies.
  • Mark Surman (President, Mozilla) – will discuss openness, open‑source, and avoiding vendor lock‑in.
  • B. Ravindran (Professor, IIT Madras) – will explain “inclusive economic progress” in practice, especially for India.
  • Wan Sie Lee (Director, AI Verify Foundation) – will contribute perspectives on verification & standards (referred to as “Wanzili” in the transcript).
  • Amir Banifatemi (Senior Executive, Cognizant) – moderator guiding the discussion (also raised the “dependencies” question).

3. Keynote‑Style Contributions

3.1 Gabriela Ramos – Inequality as the Core Context

  • Contextual framing: AI adoption is occurring against a backdrop of high, pre‑existing inequalities (income, opportunity, outcomes) that exist both in developing and advanced economies.
  • Concentration of power: Only a handful of countries produce foundational models (primarily in English). The Global South remains a consumer of these models.
  • Open‑source as a lever: Open‑source models can be adapted and fine‑tuned to suit local contexts, giving governments a tool to govern with AI rather than merely govern AI.
  • Infrastructure gaps: About one‑third of the world lacks stable internet, highlighting the need for government investment in connectivity, skills development, and procurement incentives to ensure inclusive outcomes.

3.2 B. Ravindran – Defining “Inclusive Economic Progress”

  • Digital divide in India: Persistent digital access barriers must be addressed before AI can fuel inclusive growth.
  • Beyond GDP: Inclusive progress includes health, education, and broader social services—AI must be deployed at population scale.
  • Local‑centric stack: Questioned whether the latest frontier models (e.g., GPT‑4) are the right foundation; suggested local models like Sarvam that target Indian language and cultural nuances.

3.3 Joanna Shields – Dependency Risks & Entrepreneurial Enablement

  • Audience poll: Roughly half of attendees were entrepreneurs building AI products, the other half public‑sector employees—showing the need for cross‑sector collaboration.
  • Rapid change: Recent breakthroughs in both proprietary and open‑source models have upended government AI strategies, requiring daily reinvention.
  • Citizen‑service layer: The low‑touch, high‑frequency services (e.g., tax filing, permits) present a massive entrepreneurial opportunity if governments open the playbook and co‑create with startups.
  • Call to action: Governments should “throw the playbook away” and partner with domestic innovators to deliver culturally relevant, scalable services.

3.4 Alpesh Shah – Standards, Interoperability, and Reducing “Tax” on Developers

  • Existing “India‑Stack” as a model: Distributed governance and standards already power India’s UID/UPI systems, offering a blueprint for AI.
  • Standardization as a “tax”: Over‑reliance on 11,000‑page policy documents creates a prohibitive knowledge tax for developers.
  • Desired stack characteristics:
    1. Open standards that lower entry barriers.
    2. Open‑source components that avoid vendor lock‑in.
    3. Interoperability that treats foundational models, data centres, and APIs as commodities easy to swap.
  • Certification pipeline: Proposed a stage‑gate certification framework that automatically validates compliance across the AI stack, reducing manual effort.
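The stage‑gate idea can be sketched as a minimal pipeline in which each gate must pass before the next runs. This is only an illustration of the pattern, not the framework Alpesh described; all stage names and checks below are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Stage:
    """One certification gate: a name and a pass/fail check on an artifact."""
    name: str
    check: Callable[[dict], bool]

def run_pipeline(artifact: dict, stages: list[Stage]) -> list[str]:
    """Run gates in order; stop at the first failure and return gates passed."""
    passed: list[str] = []
    for stage in stages:
        if not stage.check(artifact):
            break
        passed.append(stage.name)
    return passed

# Hypothetical gates for an AI-stack component.
stages = [
    Stage("data-provenance", lambda a: a.get("data_license") == "open"),
    Stage("model-card", lambda a: "model_card" in a),
    Stage("bias-audit", lambda a: a.get("bias_score", 1.0) < 0.2),
]

artifact = {"data_license": "open", "model_card": "v1", "bias_score": 0.1}
print(run_pipeline(artifact, stages))  # ['data-provenance', 'model-card', 'bias-audit']
```

Because each gate is just a predicate over a declared artifact, the same pipeline could be re‑run automatically whenever a model, dataset, or API in the stack changes.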

3.5 Mark Surman – Open‑Source, Public Goods, and Avoiding Lock‑In

  • Public‑good principle: Public funds should create digital public goods; open‑source licensing is the proven mechanism.
  • Two “free” models:
    1. Free‑speech (libre) – code can be inspected, modified, and redistributed (truly sovereign).
    2. Free‑beer – code is free to use but closed in terms of inspection and modification, leading to hidden lock‑in.
  • Three ways to avoid lock‑in:
    1. Adopt true free‑speech open‑source for AI models and tooling.
    2. Build domestic talent ecosystems so that local firms, not a few global giants, develop and maintain AI services.
    3. Design for interoperability—treat foundation models and data pipelines as commodities, enabling easy swapping.
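The interoperability point – treating foundation models as swappable commodities – can be illustrated with a minimal provider‑agnostic interface. This is a hypothetical sketch (all class and function names are invented), showing that application code written against an abstract interface survives a change of model vendor.

```python
from abc import ABC, abstractmethod

class TextModel(ABC):
    """Provider-agnostic interface: any backend implementing generate()
    can be swapped in without touching application code."""
    @abstractmethod
    def generate(self, prompt: str) -> str: ...

class LocalModel(TextModel):
    """Stand-in for a locally hosted, sovereign model."""
    def generate(self, prompt: str) -> str:
        return f"[local] reply to: {prompt}"

class HostedModel(TextModel):
    """Stand-in for a rented cloud model behind the same interface."""
    def generate(self, prompt: str) -> str:
        return f"[hosted] reply to: {prompt}"

def answer_citizen_query(model: TextModel, query: str) -> str:
    # Application logic depends only on the interface, not the vendor.
    return model.generate(query)

print(answer_citizen_query(LocalModel(), "How do I renew a permit?"))
```

Swapping `LocalModel()` for `HostedModel()` changes the backend but not a line of the service logic, which is the sense in which the panel treated models as interchangeable commodities.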

3.6 Singapore & India Case Studies (Joanna & Ravindran)

  • Singapore examples:
    • Education: AI tools for self‑directed learning, speech‑writing for ministers, meeting‑note summarisation, and chatbots that draw on government data to answer citizen queries.
    • Human‑capital focus: Programs with Conigson for digital literacy, reskilling, and safe technology use.
  • India examples (Ravindran):
    • UID/UPI stack: Provides granular economic transaction data, enabling novel AI‑driven services.
    • Bodhan AI Initiative: A digital public AI stack for education (open‑access licensing, not necessarily open‑source) slated for rollout over 3‑4 years, with similar upcoming stacks for healthcare, smart cities, agriculture.

3.7 Sovereignty at Multiple Levels (Discussion)

  • Government‑level sovereignty: Control over model selection, data provenance, and local customization.
  • Entrepreneurial‑level sovereignty: Avoid building on rented cloud AI services that create vendor lock‑in; prefer open‑source stacks allowing portability of models and data.
  • Individual‑level sovereignty: Citizens should be able to own their personal data and model “memory”, ensuring privacy and the ability to move it across service providers.

3.8 Accountability, Trust, and Contestability

  • Accountability mechanisms: Need for transparent audit trails, public‑facing contestability (citizens can challenge AI decisions), and continuous human oversight.
  • Trust building: Emphasised that technical standards are necessary but insufficient; cultural relevance, language support, and public awareness are equally vital.

3.9 Incentives, Investments, Infrastructure, Institutions (“Four I’s”)

  • Joanna’s closing point: The four I’s – Incentives, Investments, Infrastructure, Institutions – must be aligned to sustain an inclusive AI stack.
  • Other closing remarks:
    • Mark: “Don’t forget the humans.”
    • Alpesh: Context must be front‑and‑center; one size does NOT fit all.
    • Joanna: Emphasised caution and awareness about the cultural bias embedded in predominantly Western training data.

4. Q&A Highlights

  • Dependencies created by past AI investments (raised by Amir) – Joanna, Alpesh, Mark: Governments must continuously reinvent strategies; open‑source and standards lower dependency; financial incentives accelerate sovereign solutions.
  • How to avoid vendor lock‑in – Mark: Adopt free‑speech open‑source, nurture local talent, and ensure interoperability of components.
  • Accountability & trust at the individual level – Joanna, Alpesh: Build transparent contestability mechanisms, improve digital literacy, and embed privacy‑by‑design standards.
  • Sovereignty for startups – Alpesh: Provide portable, sovereign data models, reduce reliance on rented AI services, and create economic incentives for local model development.
  • One critical factor not to miss – Joanna: The four I’s – incentives, investments, infrastructure, institutions – are the backbone of a sustainable AI stack.

5. Emerging Themes & Consensus

  1. Inequality‑aware AI – AI must be purposefully deployed to reduce existing inequities, not amplify them.
  2. Localised models over “global” frontier models – For the Global South, language, cultural nuance, and data sovereignty outweigh raw performance.
  3. Open standards & open‑source as the “tax‑reducer” – They lower entry barriers, enable interoperability, and guard against vendor lock‑in.
  4. Public‑good licensing – Public funds should produce digital public goods that anyone can reuse, adapt, and improve.
  5. Dynamic governance – Rapid model advances demand continuous policy iteration, not static, decade‑long AI strategies.
  6. Human agency & trust – Technical solutions must be paired with education, awareness, and transparent contestability to earn public trust.
  7. Economic incentives – Tax breaks, subsidies, and clear market signals are required to motivate private firms to adopt sovereign, open‑source stacks.

Key Takeaways

  • AI must be framed within existing socioeconomic inequalities; without deliberate design, it will widen the gap.
  • Open‑source, free‑speech licensing is the only reliable path to avoid lock‑in and to preserve national and individual sovereignty.
  • Standards and interoperability act as a knowledge tax reducer, enabling developers to focus on building services rather than parsing voluminous policy documents.
  • India’s UID/UPI ecosystem demonstrates how a digital public infrastructure can unlock novel AI‑driven economic opportunities; similar models are being replicated for education, health, and smart‑city services.
  • Singapore’s public‑service pilots illustrate concrete AI use cases (learning assistants, speech‑writing bots, citizen‑query chatbots) and the importance of human‑capital development for adoption.
  • Four I’s (Incentives, Investments, Infrastructure, Institutions) are essential levers; without coordinated alignment of these, an inclusive AI stack cannot scale.
  • Accountability & contestability mechanisms must be embedded from the outset to sustain public trust and ensure that citizens can challenge AI decisions.
  • Continuous policy renewal is required; the pace of AI advancement outstrips static national strategies, demanding daily iteration and cross‑sector collaboration.
  • Cultural and linguistic relevance is non‑negotiable: AI stacks should be trained and fine‑tuned on local data (e.g., Indian multilingual contexts) to avoid a homogenised “Western” AI worldview.
  • Human agency remains central – technology is an enabler, but the ultimate goal is public services that empower people, not replace them.
