Scaling Trusted AI: Global Practices, Local Impact
Abstract
The session opened with a high‑level view of AI’s economic potential for India and the imperative to embed measurable trust into every AI system. Navrina Singh highlighted India’s AI Governance Guidelines, the “seven sutras” framework, and the need to translate policy into operational standards. Credo AI then announced two new resources: a Global Compendium of Contextual AI Use‑Cases and an AI‑Governance Insights Hub, explaining the hub’s three pillars (policy tracker, risk taxonomy, and control library) and the plan to make it freely available to the Global South.
A moderated panel followed, where representatives from G42, Mastercard, PepsiCo, and PB FinTech described concrete, sector‑specific governance practices, the challenges of localisation, and the emerging job market for AI‑governance professionals. The discussion repeatedly stressed that trust is a market‑access requirement and that responsible AI must be woven into the entire AI lifecycle, not treated as a single checkpoint. The session closed with each panelist stating a concise, actionable vision for scaling trusted AI in India and the broader Global South.
Detailed Summary
1. Opening Remarks – Navrina Singh (Credo AI)
- Economic Context – Navrina Singh opened by quantifying AI’s projected contribution to the global economy (US$ 500 billion). She noted that 47 % of Indian enterprises already have AI in production and 74 % have accelerated AI deployments over the past two years.
- Shift from Capability to Trust – The speaker argued that the defining question for the next AI wave is “who can build AI that is provably trustworthy?” rather than who can build the fastest or most accurate model.
- India’s Unique Position – Citing Aadhaar (digital identity for 1.3 billion people) and UPI (13 billion transactions/month), she described these as trust achievements built on large‑scale, sovereign infrastructure.
- AI Governance Guidelines – The recent Indian AI Governance Guidelines are framed around seven sutras that translate high‑level responsible‑AI principles into actionable pathways for builders, enterprises and institutions. Navrina highlighted that these guidelines are techno‑legal – embedded throughout the AI lifecycle and measured scientifically.
2. Credo AI’s New Offerings
2.1 Global Compendium of Contextual AI Use‑Cases
- After five years of building governance software, Credo AI compiled real‑world use‑case cards (e.g., precision agriculture, fraud detection, healthcare diagnostics).
- The compendium is sector‑agnostic but stresses that AI governance cannot be applied “peanut‑butter” style – spread uniformly across use cases – because each application demands its own impact analysis.
- Contributors mentioned: Mastercard, Autodesk, G42, IAPP, Cisco.
2.2 AI‑Governance Insights Hub (described as the first global hub of its kind)
- Purpose: Provide a continuously updated, single source of truth for AI policies, risk controls, models and vendors.
- Three Pillars
- Policy Tracker – aggregates and translates global regulations (EU AI Act, India’s Guidelines, South Korea, RBI, SEC, etc.) into plain language, validated with regulators.
- Risk Taxonomy – defines 16 risk types (e.g., bias, security vulnerability, hallucination) and maps them to policy violations, brand/revenue impact, and societal harm.
- Control Library – pairs each risk with concrete mitigation strategies (human‑in‑the‑loop, technical stress‑tests, etc.).
- Community‑Driven Model – free for the Global South; users can vote on risk‑control mappings and contribute contextual data, creating shared governance intelligence. A QR code and live demo were announced.
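The three‑pillar structure described above – a risk taxonomy linked to the policies it can violate and the controls that mitigate it – can be sketched as a simple data model. This is a hypothetical illustration, not Credo AI’s actual schema; all class, field, and policy names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """One entry in a risk taxonomy, linked to the other two pillars."""
    name: str                                          # e.g. "bias", "hallucination"
    policy_refs: list = field(default_factory=list)    # policy tracker: regulations it may violate
    controls: list = field(default_factory=list)       # control library: paired mitigations

# Illustrative taxonomy entries (names and references are examples only).
taxonomy = [
    Risk("bias",
         policy_refs=["EU AI Act Art. 10", "India AI Governance Guidelines"],
         controls=["disparate-impact testing", "human-in-the-loop review"]),
    Risk("hallucination",
         policy_refs=["sector guidance (e.g. RBI advisories)"],
         controls=["grounded retrieval", "output fact-checking"]),
]

def controls_for(risk_name: str) -> list:
    """Look up the mitigations paired with a named risk; empty list if unknown."""
    for risk in taxonomy:
        if risk.name == risk_name:
            return risk.controls
    return []

print(controls_for("bias"))
```

In a community‑driven model, the value of such a structure is that the risk‑to‑control mappings can be voted on and extended per jurisdiction without changing the schema itself.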
2.3 Emphasis on Context
- Navrina repeatedly stressed that context matters – a language model safe for internal employee Q&A could be dangerous when providing financial advice to consumers.
- Governance must span design → implementation → production, not a single “airport‑security” checkpoint.
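The context point above can be made concrete with a small deny‑by‑default check: the same model passes for an internal use case but is blocked for consumer financial advice until stricter controls are in place. All use‑case and control names here are hypothetical, chosen only to illustrate the idea.

```python
# Required controls per deployment context (illustrative names only).
REQUIRED_CONTROLS = {
    "internal_qa": {"access-logging"},
    "consumer_financial_advice": {"access-logging", "human-review", "suitability-check"},
}

def deployment_allowed(use_case: str, controls_in_place: set) -> bool:
    """A use case may go live only when all controls required for its context are met."""
    required = REQUIRED_CONTROLS.get(use_case)
    if required is None:
        return False  # unknown context: deny by default
    return required.issubset(controls_in_place)

controls = {"access-logging"}
print(deployment_allowed("internal_qa", controls))                # True
print(deployment_allowed("consumer_financial_advice", controls))  # False
```

The gate runs at design, implementation, and production stages alike, which is what distinguishes lifecycle governance from a single “airport‑security” checkpoint.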
3. Panel Discussion – Operationalising Trust
3.1 G42 (Fabrice Ciais) – Sovereign AI for the Global South
- G42 builds large‑scale data‑center infrastructure (outside the US) and develops frontier models in multiple languages (Indian English, Arabic, Kazakh, etc.).
- Responsible AI is a strategic imperative: data sovereignty, security, and nationwide trust are embedded from the start.
- Example: Digital schools pilot in Azerbaijan – local‑language models developed with extensive testing to ensure trust for education‑sector AI.
3.2 Mastercard (Caroline Louveaux) – Trust as a Business Driver
- Mastercard has used AI for fraud detection and identity verification for decades, now augmenting it with generative AI that boosts detection speed and accuracy by up to 300 %.
- Responsible AI is non‑negotiable: customers choose Mastercard because they trust its AI‑driven services.
- Governance is not friction; it provides clarity, confidence, and “guard‑rails” that accelerate innovation.
3.3 PepsiCo (Magesh Bagavathi) – Front‑Office Governance
- PepsiCo serves ≈ 1.4 billion daily consumers; AI is used across the entire value chain (farm → manufacturing → retail → consumer).
- Case study: a Mountain Dew marketing campaign flagged potential copyright infringement on mountain images; the responsible‑AI team intervened early, adjusting assets and the app to avoid legal risk.
- Governance is front‑office, not back‑office – it safeguards brand reputation while enabling rapid innovation.
3.4 PB FinTech (Rajiv Gupta) – AI for Financial Inclusion & Health
- PB FinTech handles ≈ 250 million customers yearly across insurance, lending, and newly launched PB Health (hospitals).
- Trust is essential because regulators (RBI, IRDAI) and customers demand accountability; AI underpins fraud detection, underwriting, and personalized health‑insurance recommendations.
- AI‑literacy & inclusion: training programs for employees, multilingual AI tools for India’s diverse population, and proactive monitoring of AI usage.
- Example: using AI to identify future senior‑citizen customers now (40‑year‑olds) and design long‑term financial products for them.
3.5 Cross‑Panel Themes
| Theme | Key Points from Panelists |
|---|---|
| Contextualisation | G42 builds language‑specific models; PepsiCo adapts AI for local marketing; Mastercard localises controls via its AI Garage in Pune. |
| Governance Across the Lifecycle | Credo AI’s message echoed: embed governance from data collection to model deployment. |
| Trust as Market Access | Mastercard, G42, PB FinTech all said that trust = contracts – enterprises and regulators will only work with AI that can prove its safety. |
| Emerging Job Market | Navrina highlighted the rise of AI evaluators, governors, trust‑verification specialists; panelists confirmed hiring for these roles. |
| Collaboration & Standards | Need for interoperable benchmarks across regions (India, Europe, Africa) to recognise trusted AI globally. |
| Cyber‑Security Risks | Rajiv warned that AI lowers the barrier for deep‑fake and fraud attacks; calls for cross‑border data‑intelligence sharing. |
4. Audience Q&A Highlights
- AI Agents & Commerce – Navrina asked about the status of agentic AI in India; Caroline responded that Mastercard is building standards for safe, secure agent transactions.
- Retail Lessons for PepsiCo – Navrina queried whether retail‑sector governance insights could be applied to PepsiCo; Magesh described cross‑industry AI tools (e.g., WhatsApp ordering via Meta).
- AI Literacy at PB FinTech – Navrina pressed Rajiv on employee education; Rajiv outlined monitoring AI adoption, multilingual training, and the need to build trust among a largely offline population.
5. Closing Remarks & Vision Statements
| Speaker | Concise Vision (≤ 1 minute) |
|---|---|
| Magesh Bagavathi (PepsiCo) | Leverage India’s sovereign AI infrastructure to ensure fairness, equity, and citizen welfare for 1.4 billion daily consumers. |
| Caroline Louveaux (Mastercard) | Unite public‑private data‑intelligence to stay ahead of AI‑enabled cyber‑crime; break down standards fragmentation. |
| Rajiv Gupta (PB FinTech) | Use AI to improve financial literacy and inclusion, especially for India’s growing senior‑citizen segment, while maintaining regulator‑driven trust. |
| Fabrice Ciais (G42) | Define a responsible‑AI blueprint for the Global South that respects local cultures, safety, and trust, driving widespread adoption. |
| Navrina Singh (Credo AI, Moderator) | Position India as a global leader for responsible AI, creating standards, tools, and a collaborative ecosystem that other emerging markets can emulate. |
The session concluded with applause, a brief memento ceremony, and gratitude expressed to all panelists and the moderator.
Key Takeaways
- Trust is the defining competitive edge for AI in the next wave; it must be measurable, provable, and embedded from design through deployment.
- India’s AI Governance Guidelines (‘seven sutras’) provide a techno‑legal, lifecycle‑wide framework that can be localized for diverse sectors.
- Credo AI’s new resources – the Global Compendium and the AI‑Governance Insights Hub – aim to be the single source of truth for policies, risks, and controls, freely available to the Global South.
- Context matters: language, demographic, regulatory, and cultural differences require localised models and controls (evidenced by G42’s multi‑language frontier models, PepsiCo’s marketing safeguards, Mastercard’s India‑based AI Garage).
- Governance is a strategic, not a compliance, function; it accelerates innovation by providing clear guardrails and building market confidence.
- Responsible AI is a market‑access requirement – banks, fintechs, and insurers will only win contracts if they can demonstrate trustworthy AI.
- Emerging AI‑governance careers (AI evaluators, governors, verification specialists) are already being created; organizations must invest in skilling and professionalisation.
- Collaboration across sectors and borders is essential to combat AI‑enabled cyber‑crime and to develop interoperable standards recognised globally.
- Inclusive AI for the Global South requires blueprints that respect local cultures, data‑sovereignty, and socioeconomic realities, as highlighted by all panelists.
- Actionable next steps suggested by the panel: (1) adopt Credo AI’s compendium and hub, (2) embed governance throughout the AI lifecycle, (3) build context‑specific risk taxonomies, and (4) foster cross‑industry standards to enable responsible AI at scale.
See Also:
- shaping-secure-ethical-and-accountable-ai-systems-for-a-shared-future
- navigating-the-ai-regulatory-landscape-a-cross-compliance-framework-for-safety-and-governance
- ai-impact-forum-democratising-ai-resources
- ai-innovators-exchange-accelerating-innovation-through-startup-and-industry-synergy
- responsible-ai-at-scale-governance-integrity-and-cyber-readiness-for-a-changing-world
- democratizing-ai-resources-in-india
- ai-for-everyone-empowering-people-businesses-and-society
- ai-driven-enforcement-better-governance-through-effective-compliance-services
- democratizing-ai-resources-and-building-inclusive-ai-solutions-for-india
- pathways-for-equitable-ai-compute-access