Data, People, and Pre-Empting Mass Exclusion: Building Ethical AI as Digital Public Infrastructure

Abstract

The panel examined how AI‑driven public services can unintentionally marginalise vulnerable groups. Drawing on DEF’s grassroots work (2,400+ Community Information Resource Centres and the Samriddhi Gram pilot), the discussion explored where exclusion enters the AI‑enabled digital‑public‑infrastructure (DPI) stack, how existing governance mechanisms fall short, and what concrete design, policy, and governance steps are needed to make AI‑powered DPI people‑centred, transparent, and accountable. Perspectives from civil society, government, academia, and industry were juxtaposed, and a set of actionable recommendations was produced.

Detailed Summary

  • The session was co‑organised by Digital Empowerment Foundation (DEF), Globethics, AI Safety Connect, and UNESCO’s Women for Ethical AI (South Asia).
  • A technical glitch delayed Jayesh Ranjan (Special Chief Secretary, Telangana) – he would join later.
  • Goal: Identify where algorithmic exclusion can arise in welfare, healthcare, education, finance and governance, and co‑create a people‑centric DPI definition.
  • Structure – three segments:
    1. Where exclusion can enter the AI‑DPI stack
    2. Governance of AI as public infrastructure
    3. Inclusion‑by‑design (gender, trust, industry responsibility)

2. Segment 1 – Where Does Exclusion Occur? (Osama Manzar)

  • Digital infrastructure must be “affordable, available, and usable” – Osama argued that digital public infrastructure that lives “five km away” or requires a ₹10,000 device is not public; it reproduces the same access barriers as roads, water, or electricity.
  • Access to services must be at the doorstep – Using a ration‑seeker’s story, he illustrated how biometric authentication can become a pay‑wall (₹100 for a token) that defeats the purpose of welfare delivery.
  • AI is only as good as the data it receives – He distinguished “data of the people” (captured from communities) from “data for the people” (produced to serve them). If data are untrustworthy or unaffordable, AI cannot be trustworthy.
  • AI should serve, not extract – “If AI cannot help, we don’t need it.” AI should be trustable, transparent, and non‑extractive; otherwise it becomes a tool for corporations or governments to serve themselves.
  • Scale of exclusion – In India, roughly 65% of the population is not meaningfully connected; any DPI must reach these people before AI can be layered on top.
  • Call for community‑led data – Osama stressed that community‑generated, affordable data are a prerequisite for any ethical AI stack.

3. Segment 2 – Governance, Safety & Slow Harms (Nicolas Miailhe)

  • Historical perspective – Algorithmic discrimination is not new (credit, health, education). AI merely amplifies discrimination at greater speed and scale.
  • Black‑box danger – Modern deep‑learning models (trillions of parameters) are opaque even to their creators (citing Bengio, Hinton, LeCun). Society is building a “civilisation on black boxes”.
  • Inevitable “at the table or on the menu” choice – As AI becomes embedded in essential services, opting out will be impossible for most; those who cannot afford to participate will be further marginalised.
  • Analogy with early smartphones – The “freedom to not own a phone” vanished once smartphones became essential for work; a similar loss of choice is looming for AI.
  • Risk of a “bad deal” – If communities have to pay a high price to be included, the values of future generations are compromised.

4. Segment 3 – Ethics, Implementation Gaps & Gender (Paola Galvez Callirgos, Dr Bhavani Rao R)

4.1 Paola Galvez Callirgos (Globethics)

  • Case study – Peru, 2019 – A rural woman was denied a food subsidy because a biometric system failed to recognise her fingerprint. Despite a data‑protection law and adoption of OECD AI Principles, the system excluded her.
  • Why ethics failed
    • Charters are non‑binding → No enforcement when violations occur.
    • Implementation gap → Ethical guidelines remain paper‑only.
  • Real‑world example – Dutch childcare‑benefits scandal – An algorithm built on biased assumptions wrongly flagged many families, disproportionately those with migrant backgrounds, as fraudulent, leading to wrongful benefit cuts and repayment demands.
  • Power asymmetries – Model‑builders, default‑setters, and data owners are concentrated with a few actors; communities have little influence.
  • Call to action
    • Scale up civil‑society participation in AI governance.
    • Secure funding and coordination for NGOs in the Global South.
    • Build technical capacity so CSOs can understand and audit AI.

4.2 Dr Bhavani Rao R (UNESCO Women for Ethical AI)

  • Question posed – “Where does gender exclusion hide when systems claim neutrality?”
  • Answer (summarised)
    • No system is truly neutral; datasets are dominated by male‑centric data, especially in healthcare.
    • Bias is everywhere – from loan approvals to medical dosing.
    • Historical biases (colonialism, patriarchy) are encoded in algorithms.
    • Context matters – Data‑sovereignty, local definitions of poverty, and inclusive language are essential.
    • Informal sector invisibility – Large‑scale AI never sees informal‑economy data, so those populations remain invisible to policy.
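One practical way to surface the hidden bias Dr Rao describes is a disaggregated error audit: compute a system’s failure rate separately for each demographic group rather than reporting a single aggregate accuracy. A minimal sketch (the data, group labels, and numbers below are hypothetical, for illustration only):

```python
from collections import defaultdict

def disaggregated_failure_rates(records):
    """Compute per-group failure rates for a system's decisions.

    `records` is an iterable of (group, failed) pairs, where `group` is a
    demographic label and `failed` is True when the system erred (e.g., a
    biometric match that should have succeeded but did not).
    """
    totals = defaultdict(int)
    failures = defaultdict(int)
    for group, failed in records:
        totals[group] += 1
        if failed:
            failures[group] += 1
    return {g: failures[g] / totals[g] for g in totals}

# Hypothetical audit data: overall accuracy looks acceptable (87.5%),
# but disaggregation reveals a fourfold failure-rate gap.
records = (
    [("men", False)] * 95 + [("men", True)] * 5 +
    [("women", False)] * 80 + [("women", True)] * 20
)
rates = disaggregated_failure_rates(records)
# rates["men"] == 0.05, rates["women"] == 0.2
```

The point of the sketch is simply that an aggregate metric can hide exactly the kind of gendered failure pattern the panel discussed; the audit only works if group labels and failure data are collected in the first place.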

5. Segment 4 – Responsible & Trustworthy AI (Ram Papatla, Google)

  • Product‑centric view – Google’s goal is broad distribution of multilingual AI (Gemini, NotebookLM) to serve India’s 22+ major languages.
  • Human‑in‑the‑loop feedback loops
    • Partnerships with the Centre for Responsible AI (IIT Madras) and Wadhwani AI bring grassroots feedback into product design.
    • Rapid iteration: once a bias is reported, Google can patch the model quickly.
  • Data transparency & civil‑society “stamp of approval” – Google seeks independent audits before launch, especially for language‑specific services (e.g., Hindi education tools).
  • Policy suggestion – Minimum Acceptable Risk standards should be set collectively (government, civil society, industry). Systems must prove they meet the threshold before deployment.
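The “minimum acceptable risk” proposal can be pictured as a pre‑deployment gate: a system ships only if every agreed risk metric falls within its collectively set threshold. A toy sketch (the metric names and threshold values are illustrative, not from the panel):

```python
# Hypothetical thresholds, as if set by a multi-stakeholder table
# (government, civil society, industry).
THRESHOLDS = {
    "false_rejection_rate": 0.02,
    "subgroup_error_gap": 0.01,
}

def deployment_allowed(measured, thresholds=THRESHOLDS):
    """Return (allowed, violations).

    `measured` maps metric names to audited values; a metric not listed
    in `thresholds` gets zero tolerance, so unvetted risks block release.
    """
    violations = {
        name: value
        for name, value in measured.items()
        if value > thresholds.get(name, 0.0)
    }
    return (len(violations) == 0, violations)

# A system whose biometric false-rejection rate exceeds the threshold
# is blocked, even though its subgroup error gap passes.
ok, bad = deployment_allowed({"false_rejection_rate": 0.05,
                              "subgroup_error_gap": 0.004})
# ok is False; bad == {"false_rejection_rate": 0.05}
```

The design choice worth noting is the default of zero tolerance for unlisted metrics: the burden of proof sits with the deployer, matching the panel’s “prove it before deployment” framing.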

6. Recommendations & Concrete Changes (Round‑Robin, all panelists)

Each panelist proposed one concrete change to be implemented within two years:

  • Ram Papatla (Google) – Institutionalise stronger, continuous feedback loops between product teams and civil‑society/grassroots users.
  • Osama Manzar (DEF) – Government‑led multi‑stakeholder tables to define minimum acceptable risk for any AI‑enabled public service.
  • Paola Galvez Callirgos – Increase funding for CSO research and create coordinated networks so more voices from the Global South can be heard.
  • Dr Bhavani Rao R – Mandate gender‑impact assessments for every AI system that touches welfare, health, or education.
  • Nicolas Miailhe – Expand AI‑Safety Institutes (e.g., the India AI Safety Institute) to develop behavioural‑elicitation tools and scientific evaluation of black‑box models.
  • Jayesh Ranjan (briefly, after joining) – Not fully captured in the transcript, but implied support for participatory DPI design at state level.

7. Closing Remarks & Announcements

  • DEF announced the Samriddhi Gram pilot – a village‑level DPI demonstrating affordable broadband, digital identity, and AI‑enabled service delivery.
  • Globethics highlighted the forthcoming report “Pathways to Inclusion: Advancing the Role of Civil Society in AI Governance”.
  • UNESCO mentioned a new recommendation on AI ethics and invited participants to download the Women for Ethical AI South Asia chapter report (to be released after the session).
  • Google showcased a new watermarking tool “SynthID” that tags AI‑generated media; its rollout was co‑designed with Indian newsrooms to gather early feedback.

Key Takeaways

  • Digital public infrastructure must be truly public – affordable, physically reachable, and usable at the doorstep; otherwise it reproduces existing exclusion.
  • Algorithmic exclusion is not new but now faster and larger; AI amplifies historic biases, turning “invisible” harms into systemic obstacles.
  • Ethical charters alone are insufficient – they need binding enforcement, funding for civil‑society participation, and technical capacity building.
  • No AI system is neutral; gender, caste, language, and informal‑economy biases are embedded in data and model design.
  • Multi‑stakeholder governance (government, CSOs, industry, academia) is essential to define minimum acceptable risk and to co‑create feedback loops.
  • Feedback (post‑deployment) must be complemented by “feed‑forward” (pre‑deployment) design that anticipates exclusion before products launch.
  • Local data sovereignty and community‑led data collection are prerequisites for trustworthy AI in the Global South.
  • Industry initiatives like multilingual AI, rapid bug‑fixes, and watermarking (SynthID) show that responsible AI is possible when grassroots feedback is baked into development cycles.
  • Concrete policy actions (risk thresholds, gender‑impact assessment, funding for CSO research, AI‑Safety Institutes) can be implemented within two years to curb pre‑emptive mass exclusion.

Prepared from the verbatim transcript of the panel discussion held in Delhi, 2026.
