Boosting AI Adoption in the Global South: Critical Role of Trust Across the AI Tech Stack

Abstract

The panel explored how “trust” can become the linchpin for responsible, large‑scale AI deployment in emerging economies. Speakers examined shared accountability between model creators and end‑users, concrete governance tools (e.g., Singapore’s Model Governance Framework, Anthropic’s “Constitution,” JPMorgan Chase’s audit‑trail and adverse‑action capabilities), and regional coordination mechanisms such as ASEAN working groups and the AI Verify Foundation. The discussion culminated in a rapid Q&A focusing on trust‑by‑design for vulnerable users, multilingual implementation, open‑source testing communities, and the most critical enablers for scaling trustworthy AI in the Global South.

Detailed Summary

1. Framing Trust and Shared Accountability

  • John (unlisted host): Highlighted India's talent pool and the need to move beyond theory to practical trust‑building across the AI stack. Emphasised that both developers and deployers must be accountable, citing JPMorgan Chase's 400+ production AI use cases.
  • Terah Lyons (JPMorgan Chase): Stressed "shared and explicit" accountability: creators own model design and limits; deployers own application and outcomes. Noted that regulation lags the technology and advocated governing for outcomes rather than writing technology‑centric rules. Warned against fragmented sovereign AI strategies, urging interoperable standards so that an innovation from India can be trusted worldwide.
  • Rebecca Finlay (moderator): Introduced the panel, framing trust as essential for AI adoption in both the Global North and the Global South. Described Partnership on AI's multi‑stakeholder remit and announced upcoming reports on a "thriving assurance ecosystem." Summarised three themes from a prior AI Enterprise Forum: (1) governance matters; (2) cross‑value‑chain collaboration; (3) dynamic risk management.

2. Singapore’s Trust‑Building Blueprint

  • Denise Wong: Described Singapore's "just‑right guardrails" approach: adaptive, experiment‑ready guidelines that evolve with the technology. Cited three frameworks:
    1. Model Governance Framework (2019, since updated for generative AI)
    2. Agentic AI Framework (the most recent)
    3. AI Assurance Sandbox – a test‑bed encouraging companies to adopt responsible AI for reputational benefit
    Emphasised humility, agility, and "soft" guidelines that harden into standards over time. Highlighted the AI Verify Foundation as a community where developers, testers, and deployers co‑create standards and share real‑world use‑case evaluations.
  • John (unlisted host): Re‑introduced the concept of "shared responsibility" across developers, deployers, regulators, civil society, and citizens, echoing Denise's points.

3. Developer‑Centric Safeguards (Anthropic)

  • Ria Strasser‑Galvis: Announced Anthropic's new office in Bangalore and the strong uptake of Claude in India (its second‑largest user base). Detailed the Claude Constitution – a living, publicly available set of guiding principles that (a) categorises rules across safety, ethics, helpfulness, and compliance; (b) explains where each rule comes from; and (c) is embedded in model training ("Claude's soul"). Recent updates adapt the Constitution for agentic AI. Emphasised transparency: the Constitution is openly published so users can inspect its decision logic.
  • Maria (unlisted): Briefly referenced the Model Context Protocol (MCP) – an open‑source standard for securely connecting LLMs to external tools and data. Stated that MCP has been handed to the Linux Foundation, ensuring broad, vendor‑neutral adoption.
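For context on what MCP standardises: on the wire it is a JSON‑RPC 2.0 protocol in which clients invoke server‑exposed tools through `tools/call` messages. Below is a minimal, stdlib‑only sketch of that message shape; the tool name, arguments, and reply are hypothetical, and real deployments use the official MCP SDKs plus a transport layer rather than raw strings.

```python
import json

def make_tool_call(req_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP `tools/call` request (MCP uses JSON-RPC 2.0 framing)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

def extract_text(response: str) -> str:
    """Concatenate the text parts of a `tools/call` result."""
    result = json.loads(response)["result"]
    return "".join(c["text"] for c in result["content"] if c.get("type") == "text")

# Hypothetical exchange -- the tool and its reply are illustrative only.
request = make_tool_call(1, "crop_price_lookup", {"crop": "wheat", "state": "Punjab"})
reply = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "2,275 INR/quintal"}]},
})
print(extract_text(reply))  # -> 2,275 INR/quintal
```

Because the framing is plain JSON‑RPC, any vendor can implement either side of the exchange, which is what makes the Linux Foundation hand‑off meaningful for neutrality.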

4. Deployer‑Centric Practices (JPMorgan Chase)

  • Terah Lyons: Described how JPMorgan Chase translates trust into concrete operational controls:
    • Explainability pipelines that generate human‑readable rationales for model decisions.
    • An audit trail for every deployed model, satisfying both internal governance and external regulators.
    • Adverse‑action rights – customers can request the reasons behind credit‑denial decisions, ensuring fairness and legal compliance.
    Emphasised moving from high‑level principles to practice‑level safeguards.
  • Rebecca Finlay: Noted that the financial sector's "practical governance" shows how outcome‑based regulation can be operationalised.
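The controls described above pair a per‑decision audit record with customer‑facing reason codes. A minimal sketch of how those two pieces fit together; the schema and field names here are invented for illustration, not JPMorgan Chase's actual system:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class ModelDecisionRecord:
    """One auditable entry: enough to reconstruct and explain a decision later.
    Illustrative schema only."""
    model_id: str
    model_version: str
    inputs: dict
    decision: str
    reason_codes: list  # human-readable rationales backing adverse-action notices
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def adverse_action_notice(record: ModelDecisionRecord) -> str:
    """Turn the stored reason codes into the explanation a customer may request."""
    reasons = "; ".join(record.reason_codes)
    return (f"Decision '{record.decision}' by {record.model_id} "
            f"v{record.model_version}: {reasons}")

rec = ModelDecisionRecord(
    model_id="credit-risk", model_version="3.1",
    inputs={"income": 42000, "utilisation": 0.87},
    decision="deny",
    reason_codes=["credit utilisation above threshold", "short credit history"],
)
print(adverse_action_notice(rec))
```

The design point is that the same immutable record serves both audiences: regulators replay the audit trail, while customers receive the reason codes verbatim, so the explanation can never drift from what was logged.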

5. Regional Coordination & Sovereignty

  • Dr. Mangol Serrin (AI Safety Asia): Stressed that trust in AI is higher in the Global South but fragile; governments often lack the capacity for AI assurance. Advocated a shared‑responsibility model involving developers, deployers, regulators, civil society, and citizens. Highlighted the need for procurement checklists, monitoring mechanisms, and evaluation frameworks rather than every nation building its own frontier model.
  • Denise Wong: Outlined ASEAN's work: 11 diverse economies coordinating on inclusion, localisation, and multilingual AI. ASEAN's AI working group produces common policy frameworks while respecting sovereign contexts. Emphasised multilingual technical work (e.g., language‑specific model benchmarks) alongside policy dialogues.
  • Ria Strasser‑Galvis: Confirmed Anthropic's focus on Indic‑language benchmarking and partnerships with Indian NGOs and government to tailor models for agriculture, healthcare, and industry. Mentioned upcoming launches in Japan and Europe.
  • Terah Lyons: Noted JPMorgan Chase's global footprint (100+ countries) and the importance of localised product design and workforce up‑skilling in India (its second‑largest hub outside the U.S.). Highlighted corporate philanthropy that funds AI‑skills programmes.

6. Audience Q&A – Trust‑by‑Design, Multilinguality, Open‑Source Testing

  • How can trust be embedded "by design" for vulnerable users (the farmer example)?
    Ria: The Constitution is continuously updated for local needs (e.g., agriculture benchmarks developed with NGOs). Terah: Govern for outcomes; expose model decisions to users; embed explainability and adverse‑action rights. Rebecca (paraphrased): Trust must be built at every layer of the ecosystem – developer, deployer, regulator, civil society.
  • How should multilingual implementation work in a country with as many languages as India?
    Ria: Anthropic is testing and improving performance on Indic languages; partnerships with local actors ensure cultural relevance. Denise: ASEAN's multilingual work treats support for all languages as a requirement, not an "extra." Terah: Localisation is core to JPMorgan's product strategy across markets.
  • What role do open source and community testing play?
    Rebecca (Partnership on AI): Open‑source models enable characterisation of the value chain. Ria: MCP is now under the Linux Foundation – open and vendor‑neutral. Denise: The AI Verify Foundation's open‑source benchmarks, tools, and AI Assurance Sandbox connect real‑world deployers with global testers; its reports focus on methodology, not on whether a model is "good."
  • What are the most critical enablers for scaling trustworthy AI?
    Denise: Inclusion, representation, and transparent supply‑chain accountability. Ria: Open‑source transparency and shared principles. Terah: Verifiability and clear, outcome‑focused regulation.

7. Closing Remarks

  • Rebecca Finlay thanked the panel and audience, reiterated the upcoming Partnership on AI reports on AI assurance ecosystems, and formally closed the session.

Key Takeaways

  • Shared Accountability is Essential – Trust requires both model creators and end‑users (deployers) to own distinct responsibilities; regulators, civil society, and citizens must also be part of the governance loop.
  • Outcome‑Based Governance Beats Technology‑Centric Rules – JPMorgan Chase’s “govern‑for‑outcomes” approach (explainability, audit‑trails, adverse‑action rights) offers a practical template for regulated industries.
  • Adaptive, Humble Policy‑Making – Singapore's evolving Model Governance Framework, the Agentic AI Framework, and the AI Assurance Sandbox exemplify a "soft‑first, hard‑later" stance that can keep pace with rapid AI advances.
  • Transparency Through Public Standards – Anthropic’s publicly released Claude Constitution and the open‑source Model Context Protocol (now under the Linux Foundation) provide concrete mechanisms for users to inspect model behaviour.
  • Open‑Source Communities Accelerate Trust – The AI Verify Foundation’s open benchmarks, tools, and Assurance Sandbox enable a global, reproducible methodology for testing AI systems, fostering cross‑border confidence.
  • Multilingual, Localised Solutions Are a Non‑Negotiable Requirement – Panellists from ASEAN, Anthropic, and JPMorgan Chase stressed that supporting all regional languages and cultural contexts is essential for adoption in the Global South.
  • Capacity Building for Governments and Workforce – AI Safety Asia highlighted gaps in governmental assurance capacity; JPMorgan’s up‑skilling programmes in India illustrate how private‑sector investment can bridge those gaps.
  • Risk of Fragmentation – Sovereign AI strategies that create isolated standards impede scaling; interoperable, internationally coordinated norms are vital for trust‑driven diffusion of AI innovations.
  • Open‑Source vs. Closed Models Require Nuance – While open models drive transparency and competition, risk assessment must consider both the model and its intended use case; a one‑size‑fits‑all approach is insufficient.
  • Critical Enabler Identified by Panelists – Inclusion and representation, combined with transparent, verifiable supply‑chain mechanisms, emerged as the most decisive lever for building trustworthy AI ecosystems in emerging economies.