Empowering Communities in the Age of Advanced AI: Inclusion and Safety for Sustainable Development

Abstract

The panel explored why AI safety cannot be separated from sustainability and inclusion, especially for the Global South. Using a four‑act structure, speakers examined (1) the existential stakes of unsafe AI, (2) policy and governance pathways, (3) concrete implementation experiences from large‑scale language‑AI projects in agriculture, education and public services, and (4) research‑driven recommendations for operationalising safety at scale. Throughout, the discussion underscored that safety, equity, and agency are mutually reinforcing, and that globally‑relevant standards must be crafted without a “Global‑South vs. Global‑North” binary.

Detailed Summary

Act 1 – The Existential Stakes of Unsafe AI

1. Opening Remarks & Demo Showcase

  • The moderator opened by linking AI safety to sustainability and inclusion, framing them as “two sides of the same coin.”
  • A rapid demo highlighted real‑world misuse: (i) terrorist attack planning (the Las Vegas Cybertruck attack) assisted by early‑stage chatbots; (ii) extremist recruitment facilitated via a jail‑broken GPT‑4.1; and (iii) a deep‑fake political video that falsely announced the withdrawal of an Irish presidential candidate, illustrating how synthetic media can destabilise democratic processes.

2. Jan Talen – The Existential Stakes

  • Key Insight: “Full automation of humans” pursued by leading AI firms could lead to either (a) human extinction in the worst case, or (b) a world where humans have no jobs and minimal purchasing power.
  • Flywheel Effect: Reduced human economic relevance → AI‑centric economy → further marginalisation of humans.
  • Global‑South Vulnerability: Borders cannot shield Global‑South nations; AI systems will be deployed globally regardless of local isolation.
  • Call to Action: Policymakers in the Global South must actively monitor frontier AI firms, apply diplomatic and trade pressure, and pursue coordinated slow‑down strategies to enforce safe practices.

3. Stuart Russell – Safety as a Non‑Negotiable Prerequisite

  • No Trade‑off: Benefits of AI appear only when safety is guaranteed – analogous to nuclear energy (Chernobyl) and aviation safety (737 MAX tragedy).
  • Distinguish AI Types: Large language models (LLMs) are “human imitators”; many development‑relevant AI systems (e.g., AlphaFold, climate models, irrigation decision‑support) are not LLMs and thus have different safety profiles.
  • Missed Opportunities: Current AI deployment focuses on advertising revenue rather than high‑impact societal tools (e.g., personalised tutoring, collective decision‑making).
  • Recommendation: Re‑orient research and industry toward AI that augments human agency rather than replaces it.

Act 2 – Power, Policy, and Global Governance

1. Robert Opp (UNDP) – Translating Safety into Development Programs

  • Landscape Assessments: Completed AI ecosystem analyses in 20 countries, with 10 more underway, identifying national AI needs, capacity gaps, and governance readiness.
  • Capacity Building: Emphasised the need for trained civil servants and cross‑sector expertise to manage AI responsibly.
  • Re‑imagining Trust & Safety: Launched a global “Trust & Safety Re‑imagination” contest (400+ entries, 17 winners). Projects include:
    • TrustWeave & Ushahidi – local‑language AI safety tools.
    • Silver Guard (Kenya) – misinformation detection.
  • Systemic Alignment: Introduced the Hamburg Sustainability Conference Declaration on Responsible AI to align development actors around capacity, trust, safety, and inclusion.

2. Robert Traeger (Oxford Martin) – Global Governance Blueprint

  • Dual‑Use Tension: Historical analogy to nuclear and aviation regulation – safety standards must be built in tandem with technology diffusion.
  • Technical Safeguards + Co‑Development: Argues for creating safety methods before widespread rollout and for involving global co‑developers.
  • Institutional Models: Cites ICAO, FATF, and similar bodies as templates; suggests AI‑specific “standards bodies” that can enforce consequences (e.g., trade restrictions) for non‑compliance.

3. Saryu Natarajan (Apti Institute) – Agency, Contestability, and Oversight

  • Three Facets of Agency:
    1. Participation – meaningful inclusion of affected communities in AI design.
    2. Contestability – usable grievance mechanisms that are transparent and enforceable.
    3. Oversight – community ability to modify or shut down systems.
  • Power Dynamics: Ignoring power structures yields hollow agency; the vulnerability of young men to fraud was cited as an example of an overlooked risk group.
  • Data & Labor Agency: Emphasises that workers who generate training data must have agency; AI safety is a socio‑technical problem demanding human‑centric resilience.

Act 3 – From Principles to Practice (Implementation Stories)

1. Amitabh Nag (Digital India – Bhashini)

  • National Language Translation Mission: Built AI pipelines for 36 Indian languages covering ASR, TTS, OCR, and text‑to‑text translation.
  • Scale & Reach: APIs serve ~1.4 billion people; use cases include:
    • Agriculture advisory (20 M+ Marathi‑speaking farmers).
    • Panchayat minute‑taking (audio‑to‑text for local governance).
  • Challenges: Data scarcity for low‑resource languages, dialectal variation, and the need for continuous corpus creation.
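The Bhashini pipelines described above compose ASR, translation, and TTS stages into end‑to‑end language services. A minimal sketch of such a composition is below; the function names and interfaces are illustrative assumptions for this summary, not Bhashini's actual APIs.

```python
# Hedged sketch of a speech-to-speech translation pipeline
# (ASR -> text-to-text translation -> TTS), as described above.
# The stage functions are illustrative stubs, not Bhashini's APIs.

from typing import Callable

def make_pipeline(asr: Callable[[bytes], str],
                  translate: Callable[[str], str],
                  tts: Callable[[str], bytes]) -> Callable[[bytes], bytes]:
    """Compose three stages into one speech-to-speech function."""
    def pipeline(audio_in: bytes) -> bytes:
        text_src = asr(audio_in)        # e.g. Marathi speech -> Marathi text
        text_tgt = translate(text_src)  # Marathi text -> target-language text
        return tts(text_tgt)            # target-language text -> speech
    return pipeline
```

Keeping each stage behind a narrow interface is what lets a single platform swap in per‑language models (and add new low‑resource languages) without rewriting downstream use cases such as agriculture advisory or minute‑taking.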

2. Deepika Moghlishetti (A‑Step Foundation) – Safety as Agency & Accountability

  • Safety Defined on Two Levels:
    • Individual – safe learning spaces for children.
    • Systemic – alignment with curricula, transparent accountability.
  • Key Question: “Speed for whom? Scale for whom? Safety for whom?” – encourages reframing trade‑offs around who benefits.

3. Nidhi Bhasin (Digital Green) – Community‑Centric AI for Farmers

  • AI‑Based Farmer Chat: 1 M+ users (45 % women), 8 M queries; built with farmers, not for them.
  • Feedback Loop: In‑app feedback, phone‑call verification, and farmer‑selected pilot studies.
  • Safety Mechanisms: Reinforcement Learning from Human Feedback (RLHF) with agronomist oversight; continuous guard‑rail tuning.
  • Impact Metrics: 70 % of Kenyan users act on advice within 30 days; 90 % of Indian users report increased confidence.

4. Nakul Jain (Wadhwani AI Global) – Staged Inclusion in Educational Assessment

  • Oral Reading Fluency Tool: Highlights privacy (raw data never leaves the teacher’s device) and humility (the model abstains when uncertain).
  • Staged Inclusion: Deploys representative mini‑datasets first, iterates before scaling nationally.
  • Outcome: Expectation management is essential – multiple failure cycles are inevitable before success.
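The "humility" mechanism above can be illustrated with a confidence‑threshold abstention rule. This is a hedged sketch under stated assumptions: the scoring interface, threshold value, and return shape are hypothetical, not Wadhwani AI's actual implementation.

```python
# Hedged sketch of an "abstain when uncertain" rule for an oral reading
# fluency assessor. All names and the 0.8 threshold are illustrative
# assumptions; raw audio stays on-device, only scores are computed here.

def assess_reading(word_scores, threshold=0.8):
    """Return a fluency estimate, or abstain when confidence is low.

    word_scores: list of (word, confidence) pairs from a hypothetical
    on-device speech model.
    """
    if not word_scores:
        return {"status": "abstain", "reason": "no audio scored"}
    avg_conf = sum(c for _, c in word_scores) / len(word_scores)
    if avg_conf < threshold:
        # Humility: defer to the teacher rather than emit a guess.
        return {"status": "abstain",
                "reason": f"low confidence ({avg_conf:.2f})"}
    correct = sum(1 for _, c in word_scores if c >= threshold)
    return {"status": "ok", "words_correct": correct,
            "total": len(word_scores)}
```

The design choice worth noting is that abstention is a first‑class output, not an error path: downstream workflows (here, the teacher) are expected to handle the "defer to a human" case routinely.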

Act 4 – Research Perspectives & Closing Thoughts

1. Aditya Gopalan (IISc) – Operationalising Safety at Scale

  • Current ML Pipeline Critique: “Collect‑data → train → pray for generalisation” leads to emergent, sometimes unsafe, capabilities.
  • Post‑hoc Patch Problem: Safety is often addressed after harmful behavior surfaces, creating a band‑aid approach.
  • Foundational Proposal:
    • Define desired and undesired capabilities a priori (including manipulation over multi‑turn dialogues).
    • Use causal‑inference frameworks to translate safety definitions into measurable metrics.
    • Recognise the Global‑South data‑labor economy: ensure data collection respects local ethics and economic realities.

2. Kalika Bali (Microsoft Research India) – Avoiding a “Global‑South vs. Global‑North” Paradigm

  • Critique of Binary Framing: The Global South encompasses 72+ countries; it is the majority of humanity, not a peripheral “other.”
  • Universal Safety Benchmarks: Calls for culture‑agnostic protocols that can be adapted locally, warning that “translating safety from English to other languages” risks imposing a new form of digital colonialism.
  • Concrete Example: Legal bans on fetal sex determination in India clash with AI tools capable of analyzing ultrasound images; safety must respect such jurisdiction‑specific norms.
  • Final Message: Safety, inclusion, and evaluation must be built for everyone, not as an afterthought for the Global South.

Closing Remarks

  • Moderator thanked all speakers and the audience, emphasizing that safety must be intertwined with sustainable development, agency, and equitable implementation.

Key Takeaways

  • Safety and inclusion are inseparable; AI benefits materialise only when robust safety guarantees exist.
  • Existential risk (human extinction, economic marginalisation) is a real concern if AI development proceeds without coordinated global slowdown and policy oversight.
  • Governance frameworks (UNDP landscape assessments, trust‑and‑safety contests, ICAO/FATF‑style standards bodies) are essential to align AI diffusion with safety.
  • Agency must be plural: participation, contestability, and oversight are three pillars for community‑centric AI.
  • Implementation examples (Digital India’s multilingual hub, Digital Green’s farmer chat, AI reading‑fluency tool) demonstrate practical safety mechanisms: data localisation, RLHF, abstention filters, staged roll‑outs.
  • Safety metrics need to be defined before deployment, using causal‑inference and quantitative benchmarks rather than post‑hoc patches.
  • Avoid a Global‑South/Global‑North dichotomy; develop universal safety protocols that can be locally contextualised without imposing digital colonialism.
  • Power dynamics matter – diplomatic pressure, trade leverage, and inclusive governance can compel AI firms and host nations to adopt safer practices.
  • Human‑centric AI (education tutoring, collective decision‑making) offers higher societal returns than profit‑driven advertising models.
  • Continuous feedback loops and community co‑design are the most reliable ways to ensure AI systems remain safe, trustworthy, and beneficial across diverse contexts.
