Multi-Stakeholder Collaboration to Foster AI Adoption in the Global South

Abstract

The panel explored how governments, industry, academia and civil society can jointly accelerate responsible AI uptake across the Global South. Panelists shared concrete examples: UN‑level governance work, Google's weather‑forecasting and African‑language initiatives, data‑ownership and licensing debates, low‑bandwidth model deployment, finance‑sector trust models, and education pathways. The discussion repeatedly highlighted the need for equitable power‑sharing, context‑aware evaluation, and capacity‑building to ensure that AI's benefits are inclusive rather than extractive.

Detailed Summary

1. Opening

  • Moderator: Heyman Pandey (CEO, Center for Responsible AI, Madras) explained that, owing to a last‑minute change, he would serve as both host and moderator.
  • He framed the session around “responsible and inclusive AI adoption in the Global South” and emphasized the diversity of the panel (government advisors, industry leaders, academia, non‑profits).
  • The moderator announced a round‑robin format: each panelist would have 2–3 minutes to summarise their work, focusing on stakeholder collaboration and key challenges.

2. Panelist Introductions

Brief bios (as read by the moderator):

  • Dame Wendy Hall: DBE, FRS, Regius Professor of Computer Science, Associate VP International Engagement, and Director of the Web Science Institute, University of Southampton. Former UK‑government AI adviser; member of the UN High‑Level Advisory Board on AI.
  • Dr Aisha Walcott‑Bryant: Senior Staff Research Scientist and Head of Google Research Africa (offices in Accra and Nairobi). Leads AI for climate, health, education and food security; champion of African‑led research and open‑source tools.
  • Prof Vukosi Marivate: Professor of Computer Science and ABSA UP Chair of Data Science, University of Pretoria. Works on machine learning for low‑resource languages, equitable licensing, and social‑impact data science.
  • Safiya Hussain: Chief Impact Officer and Co‑founder, CARIA (humanitarian AI non‑profit). Background in Central Asian humanitarian work, UN consulting, and impact evaluation of AI projects.
  • Dr Makran Tapaswi: Principal ML Scientist, Wadhwani AI; Assistant Professor, CVIT, IIIT Hyderabad. Focus on AI for health, education and agriculture; practitioner of low‑resource, offline model deployment.
  • Dr Niyati Chhaya: Co‑founder and AI Lead, HyperBots; former Adobe Research scientist. Works on multimodal NLP and open‑source speech models; teaches at IIIT Hyderabad and ISB.
  • Finance‑AI founder (name not captured): Founder of a finance‑AI startup; emphasizes trust, data privacy, and regulatory alignment for B2B AI solutions.

3. Round‑Robin Summaries

3.1 Dame Wendy Hall – Global Governance & UN Advisory Work

  • UN High‑Level Advisory Board on AI (40 members, half from the Global South).
  • Produced the “Governing AI for Humanity” report (Sept 2024) presented at the UN General Assembly; stresses “AI for everybody” (inclusive, not just wealthy nations).
  • Highlights:
    • Need for a global scientific board (recently recommended).
    • Global dialogue on AI policy slated for the AI for Good conference (July 2025).
    • Emphasises inclusion of China, the US, and Global‑South voices to prevent a West‑centric governance model.
  • Calls on panelists to continue supporting multilateral, cross‑regional standards.

3.2 Dr Aisha Walcott‑Bryant – Google Research Africa

  • Mission: “AI for Africa, by Africans” – research, innovation and impact measured rigorously.
  • Two flagship collaborations:
    1. Weather‑Nowcasting: Partnered with national meteorological agencies; leveraged satellite data + a global model to provide 5 km precipitation forecasts across Africa (where only ∼37 radar stations exist). Delivered via Google Search, enabling farmers & disaster responders to access hyper‑local forecasts.
    2. African‑Language Speech Dataset (Wahal NLP): Co‑created with Makerere University (Uganda), Digital Umuganda and others; collected 11 k hours of speech data for low‑resource African languages, released open‑source on Hugging Face for ASR & TTS. Emphasised data ownership for local communities.
  • Framework: From research → product → impact; projects are shipped only when measurable impact pathways exist (e.g., climate, health, education).

3.3 Prof Vukosi Marivate – Low‑Resource Language & Equitable Licensing

  • Problem: Africa has roughly 2,000–3,000 languages; many are digitally “low‑resource” yet have millions of speakers.
  • Approach: Combine technical ML work with social‑technical considerations – legal frameworks, community expectations, and equitable licensing that lets language communities retain ownership and benefit from models built on their data.
  • Collaboration Model: Works with government, industry (Google, Wadhwani AI) and civil‑society to set licensing terms that prevent extractive practices.

3.4 Safiya Hussain – Impact‑Centric Data Governance

  • Highlights the four “power‑house” data‑collection nations (India, Kenya, Uganda, Philippines) because of English proficiency and existing pipelines.
  • Emerging trend: Growing national budgets for local‑language data collection and stress‑testing of models to ensure contextual relevance.
  • Key warning: Avoid extractive data pipelines—data contributors must obtain ownership or benefit‑sharing. Calls for new governance structures to define “data ownership” and “model ownership” in the Global South.

3.5 Dr Makran Tapaswi – Offline, Low‑Bandwidth AI

  • Use‑case: Newborn anthropometry – estimating infant weight from video using computer vision.
  • Technical solution: Model compression enabling inference on low‑cost smartphones (₹8,000‑10,000) offline, bypassing unreliable internet.
  • Scalability: Same approach can be applied to agriculture (e.g., pest detection) and education (e.g., mobile‑based tutoring).
  • Stresses data‑sourcing (community‑driven) and multi‑stakeholder pipelines from field workers → model builders → end‑users.
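The compression step Makran describes can be sketched in miniature. The snippet below shows symmetric post‑training 8‑bit weight quantization, one common technique for shrinking a model until it runs offline on an inexpensive phone; it is an illustrative sketch only, not Wadhwani AI's actual pipeline, and the function names are invented for this example.

```python
# Illustrative sketch: symmetric post-training 8-bit weight quantization.
# Storing each weight as an int8 code plus one shared scale cuts memory 4x
# versus float32 -- one ingredient of on-device, offline inference.

def quantize_int8(weights):
    """Map float weights to int8 codes plus a shared scale factor."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0
    codes = [max(-128, min(127, round(w / scale))) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from int8 codes."""
    return [c * scale for c in codes]

weights = [0.52, -1.30, 0.07, 0.99, -0.41]
codes, scale = quantize_int8(weights)
restored = dequantize(codes, scale)
# Reconstruction error is bounded by half a quantization step (scale / 2).
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(codes)                 # → [51, -127, 7, 97, -40]
print(max_err <= scale / 2)  # → True
```

Real deployments would rely on a framework's quantization tooling (e.g., dynamic quantization or quantization‑aware training in PyTorch or TensorFlow Lite) rather than hand‑rolled code, but the memory‑versus‑accuracy trade‑off is the same.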

3.6 Dr Niyati Chhaya – Open‑Weight Multimodal Models

  • Co‑founded HyperBots, focusing on multimodal NLP (text + speech + vision).
  • Emphasises open‑weight models (e.g., Gemma, MedGemma) that can be fine‑tuned on domain‑specific datasets (e.g., AfroMed QA, a 25k Q&A set from 16 African countries).
  • Demonstrates how open repositories empower local researchers to adapt global models to regional medical contexts without building from scratch.
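The adaptation pattern in this section (freeze a large pretrained backbone and train only a small task head on local, domain‑specific data) can be illustrated with a toy pure‑Python example. Everything here is hypothetical: `frozen_encoder` stands in for an open‑weight model such as Gemma, and real fine‑tuning would go through a framework like Hugging Face Transformers.

```python
import math

# Toy stand-in for a frozen pretrained encoder: in real work this would be
# an open-weight model loaded from a hub, with its parameters left untouched.
def frozen_encoder(x):
    return [math.tanh(x), math.tanh(2 * x)]

def train_head(data, epochs=200, lr=0.5):
    """Train a tiny logistic-regression head on top of the frozen features."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            f = frozen_encoder(x)
            z = w[0] * f[0] + w[1] * f[1] + b
            p = 1 / (1 + math.exp(-z))
            g = p - y                      # gradient of the cross-entropy loss
            w = [w[0] - lr * g * f[0], w[1] - lr * g * f[1]]
            b -= lr * g
    return w, b

def predict(w, b, x):
    f = frozen_encoder(x)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0

# Tiny "local" labelled dataset: label 1 for positive inputs, 0 otherwise.
data = [(-2, 0), (-1, 0), (1, 1), (2, 1)]
w, b = train_head(data)
print([predict(w, b, x) for x, _ in data])  # → [0, 0, 1, 1]
```

The design point mirrors the panel's: only the small head is trained, so a regional team can adapt a global model with modest data and compute instead of building from scratch.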

3.7 Finance‑AI Founder – Trust, Data Privacy & Policy Inclusion

  • Describes a B2B finance‑AI startup that must convince conservative CFOs to adopt AI.
  • Highlights three stakeholder layers:
    1. Data owners (clients) → need secure, anonymised data pipelines.
    2. AI builders → must guarantee model robustness and compliance.
    3. Regulators → need clear policy pathways for startups to be included in national AI strategies.
  • Calls for formal channels that let small innovators participate in policy‑making alongside larger corporations and governments.

4. Thematic Deep‑Dive

  • Multi‑Stakeholder Collaboration: Panelists repeatedly cited co‑design, shared data ownership, and joint governance as essential. Examples: the UN Advisory Board, Google's partnerships with meteorological agencies, and Wadhwani AI's offline model pipeline.
  • Data Collection & Ownership: Safiya warned against “extractive” pipelines; Vukosi and Aisha stressed community‑owned datasets (e.g., Wahal NLP, AfroMed QA) and proposed equitable licensing frameworks.
  • Model Deployment in Low‑Resource Settings: Makran's offline smartphones and Aisha's weather nowcasting, which works without dense radar networks; emphasis on model compression and edge inference.
  • Evaluation & Impact Measurement: Multiple speakers (Safiya, Vukosi, audience) argued that one‑size‑fits‑all evaluation is impossible; participatory, context‑specific metrics are needed that capture social impact, not just accuracy.
  • Finance & Trust: The finance‑AI founder highlighted the trust‑building loop: secure data handling → model reliability → regulatory acceptance → market uptake.
  • Education & Workforce Preparation: Audience questions centred on preparing undergraduate and graduate students for an AI‑augmented job market; panelists advocated critical thinking, ethics, interdisciplinary collaboration, and early exposure to real‑world impact projects.
  • Digital Divide: A question to Wendy about bridging the divide prompted a brief discussion; the consensus was that education plus multi‑stakeholder outreach (government, NGOs, industry) is the primary lever.
  • Future Directions: Panelists highlighted the upcoming global AI dialogue (AI for Good 2025), new open‑source medical QA datasets, and national budgeting for language data as concrete steps forward.

5. Audience Q&A

  • Q: How should impact evaluation be structured across different contexts? (Safiya Hussain, Vukosi Marivate, audience) Takeaway: no universal baseline; evaluation must be participatory, context‑specific, and capture social and cultural nuances.
  • Q: How can the digital divide within the Global South be addressed? (brief attempt with Wendy Hall, partly unclear; audience) Takeaway: education and collaborative outreach are essential; technical infrastructure alone isn't enough.
  • Q: What skills do graduating CS students need in an AI‑driven job market? (various panelists, implicit) Takeaway: critical thinking, ethics, interdisciplinary collaboration, and the ability to question one's own work.
  • Q: How can small AI startups influence policy? (finance‑AI founder) Takeaway: establish formal policy‑inclusion mechanisms, showcase trust‑building case studies, and engage regulators early.

6. Closing Remarks

  • Moderator thanked panelists for “showcasing the power of collaboration across academia, industry, government and civil society.”
  • He reiterated that responsible AI in the Global South hinges on shared ownership, context‑aware evaluation, and education.
  • The session concluded with a brief appreciation note and an invitation to continue the conversation during the conference’s networking break.

Key Takeaways

  • Inclusive Global Governance: The UN High‑Level Advisory Board on AI (with strong Global‑South representation) is shaping an “AI for everybody” policy framework; a global scientific board and a global AI dialogue are upcoming milestones.
  • Community‑Owned Data: Projects like Wahal NLP and AfroMed QA demonstrate that open‑source, community‑controlled datasets enable equitable AI development and avoid extractive practices.
  • Multi‑Stakeholder Partnerships: Successful initiatives (weather‑nowcasting, newborn anthropometry) combine government agencies, industry research labs, and local data collectors to bridge resource gaps.
  • Low‑Bandwidth Deployment: Model compression and offline inference on cheap smartphones make AI usable where internet connectivity is unreliable, expanding impact in health, agriculture and education.
  • Equitable Licensing & Ownership: Legal and ethical frameworks must grant data and model ownership to the communities that generate them, preventing exploitation.
  • Evaluation Must Be Contextual: Impact measurement should be participatory, aligning metrics with local social, cultural, and sectoral realities; a single global benchmark is infeasible.
  • Trust in High‑Risk Sectors: Finance‑AI startups illustrate that secure data pipelines, regulatory engagement, and transparent model performance are prerequisites for adoption in conservative industries.
  • Education as the Bridge: Scaling AI responsibly requires early education on AI fundamentals, ethics, and critical thinking for both students and the broader public.
  • Power Dynamics Need Redressing: Across the panel, the recurring theme is to flatten power asymmetries between Global‑North model builders and Global‑South data contributors.
  • Future Outlook: Continued investment in national language‑data initiatives, global policy dialogues, and open‑weight model ecosystems will sustain collaborative AI growth across the Global South.
