From Guidelines to Ground: Institutional AI Safety for India and the Global South
Detailed Summary
1. NVIDIA’s Inception Programme – Arundhati Banerjee
Arundhati Banerjee opened the round‑table by outlining NVIDIA’s Inception programme – a free, flexible incubator that supports startups (AI‑first or otherwise) with infrastructure, technical guidance, fund‑raising assistance and go‑to‑market (GTM) support.
Key points she made:
- India houses >200 000 startups, of which ≈7 500 are deep‑tech/AI‑first.
- Inception does not use a cohort‑based model; instead it offers long‑run runway (from “day‑zero” to ten years) and graduates companies into other NVIDIA programmes.
- The VC‑Alliance runs in parallel, linking investors with the ecosystem to surface breakthrough ideas.
- The aim is to co‑create impact for India and the world, ensuring that AI‑enabled ventures have the hardware, talent and capital needed to succeed.
Transition: Abdullah (Zephara AI) thanked Arundhati and invited the next speaker.
2. Coalition‑building for AI Safety – The Dialogue
Jameela Sahiba (The Dialogue) described the coalition that serves as the secretariat for the “Responsible Evolution of AI” initiative.
- The coalition aggregates diverse stakeholders (government, academia, industry, civil society) to feed AI‑safety inputs into policy formulation.
- Its work focuses on creating channels for dialogue, producing AI‑safety briefs, and shaping the government’s approach to AI regulation.
Kamesh Shekhar (associate director, The Dialogue) added that the coalition’s work is essentially “shadow‑matching” the needs of large organisations with the capacities of smaller players, emphasizing the need for minimum‑common‑denominator standards and incentive structures that make safety a competitive advantage rather than a compliance burden.
- Key Insight: Safety standards must be value‑creating, not merely a regulatory hurdle; otherwise, startups may be excluded or forced to adopt costly, mis‑aligned practices.
3. Institutional Design Gaps – Dr Arjun Goswami
Dr Arjun Goswami framed the discussion around the “weakest link” in India’s AI governance:
| Weakness | Description |
|---|---|
| Fragmented legal framework | No single horizontal AI law; governance is split across the IT Act, data‑protection rules, sector‑specific rules. |
| Light‑touch regulation | Enforcement bodies (e.g., Data Protection Board) have limited sanctioning power; “soft” oversight may not deter non‑compliance. |
| Top‑down only | Lack of industry‑driven standards or “core regulation” that could act as a first line of defence. |
| Capacity constraints | Regulators lack the GPU/compute infrastructure to audit AI models at scale; verification often relies on good‑faith declarations. |
| Liability ambiguity | Unclear allocation of responsibility among developers, deployers, and end‑users. |
He warned that over‑burdening developers with heavy compliance without matching regulatory capacity can strangle innovation, while under‑regulating leads to unchecked harms.
4. Payments‑Sector Risks – Ravi Aurora (Mastercard)
Ravi Aurora highlighted the financial‑services perspective, where AI‑driven fraud detection and risk scoring operate on millisecond‑scale decisions.
- Trust is central – consumers only adopt digital payments if they trust that transactions are safe and privacy is respected.
- AI models must balance false‑positive rates (which frustrate legitimate users) against false‑negative rates (which let fraud through).
- Mastercard’s design principles: data ownership, privacy, inclusivity, transparency, social impact.
- Human‑centric design: models must be auditable, bias‑aware, and adaptable to local contexts (e.g., multilingual India).
Audience Q&A: A follow‑up question asked whether institutional failures emerge in high‑adoption settings. Aurora responded that false positives are the most visible failure—if users repeatedly see legitimate transactions blocked, adoption drops sharply.
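The false‑positive/false‑negative tension Aurora describes can be made concrete with a toy threshold sweep. The scores and labels below are invented for illustration; real payment systems tune thresholds on millisecond‑scale risk scores against business cost functions.

```python
# Toy illustration of the fraud-scoring trade-off: raising the decision
# threshold lowers false positives but lets more fraud through.
# (model risk score, true label: 1 = fraud, 0 = legitimate) -- invented data
transactions = [
    (0.95, 1), (0.80, 1), (0.60, 0), (0.55, 1),
    (0.40, 0), (0.30, 0), (0.20, 0), (0.10, 0),
]

def rates(threshold):
    """Return (false-positive rate, false-negative rate) at a threshold."""
    fp = sum(1 for s, y in transactions if s >= threshold and y == 0)
    fn = sum(1 for s, y in transactions if s < threshold and y == 1)
    legit = sum(1 for _, y in transactions if y == 0)
    fraud = sum(1 for _, y in transactions if y == 1)
    return fp / legit, fn / fraud

for t in (0.25, 0.50, 0.75):
    fpr, fnr = rates(t)
    print(f"threshold={t:.2f}  FPR={fpr:.2f}  FNR={fnr:.2f}")
```

On this toy data, moving the threshold from 0.25 to 0.75 drives the false‑positive rate to zero while the false‑negative rate climbs, which is exactly the adoption‑versus‑fraud trade‑off raised in the Q&A.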
5. Diplomatic & International Cooperation – Juan Carlos Rojas Arango (Colombia) & Udit Malik (France)
Juan Carlos Rojas Arango (Colombia) framed AI safety as a global‑south cooperation challenge:
- Latin America is developing its own AI capabilities (e.g., Chile’s national AI model).
- Colombia is an early adopter, participating in the GPAI (Global Partnership on AI) and OECD (Organisation for Economic Co‑operation and Development) working groups.
- Colombia’s energy mix (≈60 % hydroelectric) could host green data‑centres, aligning AI infrastructure with climate goals.
He stressed that regional alliances (e.g., Pacific Alliance) and South‑South collaboration (India ↔ Latin America) are essential to avoid importing unsuitable governance templates.
Udit Malik (France) briefly noted that European‑French perspectives can complement South‑South efforts by providing technical expertise and multilateral frameworks (e.g., UN Digital Compact) while respecting local realities.
6. Coalition‑Level Challenges – Kamesh Shekhar & Jameela Sahiba (Revisited)
Kamesh reiterated two “missing pieces” from a coalition viewpoint:
- Incentive structures – why would firms voluntarily adopt safety standards? He suggested market‑based rewards (e.g., “privacy‑first” branding, preferential procurement).
- Least‑common‑denominator approach – standards must accommodate small‑scale actors without over‑burdening them.
Jameela argued that AI safety is a capacity problem, not only a regulatory one. She emphasized the need for procedural and substantive justice:
- Procedural justice – transparent decision‑making mechanisms, stakeholder participation.
- Substantive justice – ensuring benefits reach marginalized groups while risks are mitigated.
She advocated for breaking silos and embedding coalition members at the policy table to ensure inclusive governance.
7. Designing an AI‑Safety Institute – Prof Siddharth Chauhan
Prof Chauhan drew on his work on an AI‑Safety Institute (ASI) for India.
- Incentivised information‑sharing is the core: firms disclose vulnerabilities to the ASI in exchange for liability reduction and trusted‑third‑party verification.
- Global‑South network: ASI should link with international counterparts (UK, US) to share model‑risk assessments without each country needing direct access to frontier models.
- Human‑centric focus: Beyond technical risk, the institute must monitor bias, linguistic exclusion, and societal impact.
- Legal clarity: Define value‑chain responsibilities (developers → deployers → users) and state‑corporate boundaries (e.g., military AI use).
Audience interaction: A participant from Colombia noted that Colombia has already adopted UNESCO AI‑ethics guidelines for the judiciary and is drafting a national AI‑regulation plan (2025). The dialogue highlighted knowledge‑exchange opportunities between India and Colombia.
8. Legal & Governance Frameworks – Prof Chauhan & Dr Goswami (follow‑up)
Prof Chauhan answered a question about accountability standards:
- Institutions should be able to show audit trails, provenance metadata, and impact‑assessment reports that are simple and credible.
- Legal frameworks must embed mandatory origin‑labels (model provenance), third‑party verification, and graded liability (different obligations for startups vs. large firms).
- He highlighted “sovereign AI” – ensuring national data‑centres and datasets are sufficient to avoid reliance on foreign infrastructure.
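The audit trails and origin‑labels discussed above imply some machine‑readable provenance record. A minimal sketch is below; the field names and values are illustrative assumptions, not a standard schema, but they show how value‑chain roles (developer vs. deployer) and audit history could be captured for regulator inspection.

```python
# Hypothetical model-provenance ("origin label") record; all field names
# and values are illustrative assumptions, not an established standard.
from dataclasses import dataclass, asdict
import json

@dataclass
class ProvenanceRecord:
    model_name: str
    version: str
    developer: str               # who built the model
    deployer: str                # who put it into service
    training_data_sources: list  # datasets used, for audit-trail purposes
    last_audit_date: str         # ISO date of most recent third-party audit
    auditor: str                 # certifying body that performed the audit

record = ProvenanceRecord(
    model_name="fraud-scorer",
    version="2.1.0",
    developer="Example Labs",
    deployer="Example Bank",
    training_data_sources=["internal-transactions-2024"],
    last_audit_date="2025-01-15",
    auditor="Certified Audit Body",
)

# Serialised, the record becomes an audit-trail artefact a regulator
# or third-party verifier could inspect.
print(json.dumps(asdict(record), indent=2))
```

Separating the `developer` and `deployer` fields mirrors the graded‑liability idea: each actor in the value chain signs off on its own portion of the record.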
Dr Goswami added that existing Indian statutes (IT Rules 2023, upcoming 2026 amendments) already require model‑labeling and data‑protection safeguards, but capacity gaps (limited technical expertise, fragmented regulator coordination) hinder enforcement.
- He advocated for single‑window clearances and audit‑market development (certifying bodies that can vet AI systems).
9. Environmental & ESG Concerns – Audience Question to Dr Goswami
An audience member asked about e‑waste generated by AI hardware and green‑AI incentives.
- Dr Goswami identified a “blind spot”: the lifecycle emissions of GPUs, data‑centre cooling, and hardware disposal.
- He suggested policy levers: mandatory e‑waste reporting, green certification for AI hardware, and financial incentives (tax credits, ESG‑linked financing) for companies that adopt energy‑efficient models or circular‑economy hardware practices.
10. Stakeholder‑Capitalism & ESG – Audience Question to Arundhati Banerjee
Another participant asked how businesses can be motivated to adopt green AI in a stakeholder‑capitalism paradigm.
- Arundhati argued that ESG considerations are already mandatory for many Indian corporates, and AI‑related ESG metrics (e.g., carbon intensity per inference, responsible data use) can be built into vendor contracts and investment criteria.
- She highlighted the need for transparent reporting and third‑party audits that link AI safety with environmental performance, turning sustainability into a market differentiator rather than a cost.
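A metric like carbon intensity per inference can be computed from measurable quantities. The figures below are invented for illustration; real reporting would use metered energy draw and a grid‑specific emission factor.

```python
# Illustrative calculation of carbon intensity per inference.
# All figures are invented; real ESG reporting would use metered
# energy consumption and the local grid's published emission factor.
energy_per_inference_kwh = 0.0005   # assumed energy draw per request (kWh)
grid_emission_factor = 0.7          # assumed kg CO2e per kWh (varies by grid)

co2e_per_inference_kg = energy_per_inference_kwh * grid_emission_factor
print(f"{co2e_per_inference_kg * 1000:.3f} g CO2e per inference")
```

A figure like this, once audited, is the kind of AI‑related ESG metric that could be written into vendor contracts and investment criteria as described above.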
11. UNESCO Guidelines in Colombia – Audience Follow‑up
A question about Colombia’s adoption of UNESCO AI guidelines for courts raised the issue of remaining gaps in judicial AI.
- Juan Carlos Rojas Arango noted that court‑level AI currently handles document classification and predictive analytics, but issues remain in bias mitigation, explainability, and accountability for judgements.
- He emphasised the need for continuous monitoring, human‑in‑the‑loop safeguards, and public‑access to audit logs to ensure fairness.
12. China’s AI Governance Model – Audience Question
An audience member asked about China’s “digital exceptionalism”—treating cyberspace as sovereign territory—and its relevance for Global‑South nations.
The speaker (unnamed, responding on behalf of the panel) summarised:
- China operates an Algorithmic Registration System that forces companies to disclose model details.
- Positive: Gradual capacity‑building and incremental regulatory learning.
- Negative: Discretionary enforcement that can be used for political control, limiting civil‑rights protections.
For India and other Global‑South countries, the takeaway is to balance state‑led oversight with independent, industry‑driven audits, ensuring civil‑society safeguards.
13. Regulator’s Role & Enforcement – Audience Question
A regulator (self‑identified) asked about enforcement posture beyond sandboxes and audits.
Ravi Aurora explained that sectoral regulators (RBI, SEBI, and the Telecom Regulatory Authority of India) are issuing advisories on AI bias, but no heavy‑handed punitive regime exists yet.
Prof Chauhan suggested a tiered enforcement model:
- Pre‑deployment sandbox/impact‑assessment (mandatory for high‑risk systems).
- Periodic independent audits with penalties for non‑compliance (e.g., fines, license suspension).
- Public transparency portals where regulators publish audit outcomes to build citizen trust.
The consensus: Regulators must act as facilitators and enforcers, providing clear guidance, supporting capacity‑building, and ensuring accountability mechanisms are visible to the public.
14. Closing Remarks & Final Q & A
The session wrapped up with a rapid round of audience questions on:
- E‑waste incentives and ESG‑linked procurement (answered by Dr Goswami & Arundhati).
- Future of UNESCO‑guided AI in courts (answered by Juan Carlos).
- China’s model and digital exceptionalism (summarised as above).
Abdullah thanked all panelists and participants, emphasizing that the collective insights will feed into the Institutional Safety‑Case Standard and Red‑Teaming Checklist that the round‑table aims to publish.
Key Takeaways
- Fragmented legal architecture in India (and many Global‑South nations) creates a weakest link; a horizontal AI law or coordinated “AI‑Safety Institute” is needed to unify standards.
- Capacity gaps (compute, expertise, audit infrastructure) limit regulators’ ability to verify AI systems; incentivised information‑sharing and third‑party audits can bridge this gap.
- Liability must be clearly allocated along the AI value chain (developers → deployers → users) with graded obligations for startups vs. large enterprises.
- Financial‑sector AI hinges on trust and low false‑positive rates; continuous model monitoring and human‑centric design are essential for consumer adoption.
- Multi‑stakeholder coalitions (e.g., The Dialogue) are crucial to inject procedural and substantive justice into AI governance, ensuring that policies reflect the needs of marginalized groups.
- South‑South diplomatic cooperation (India ↔ Colombia, Pacific Alliance) can foster shared resources, green data‑centre capacity, and regional AI‑safety networks.
- Environmental sustainability (e‑waste, energy consumption) must be integrated into AI‑safety standards through mandatory reporting, green certifications, and ESG‑linked incentives.
- China’s incremental algorithm‑registration offers a lesson in capacity‑building, but its discretionary enforcement warns against over‑centralisation that may harm civil liberties.
- Regulators should adopt a tiered enforcement model: pre‑deployment sandbox/impact‑assessment, periodic independent audits, and transparent public reporting to build trust.
- Practices for AI builders (as highlighted by NVIDIA) include: continuous evaluation, infrastructure‑aware guardrails, operationally tested governance, auditability, and clear answers to three questions – what problem does the AI solve? who is accountable if it fails? what happens when the model degrades?
These insights collectively shape a pragmatic pathway from high‑level AI‑safety principles to ground‑level institutional guardrails suitable for India and other Global‑South economies.
See Also:
- ai-innovators-exchange-accelerating-innovation-through-startup-and-industry-synergy
- founders-funders-the-india-ai-capital-ecosystem
- ai-for-everyone-empowering-people-businesses-and-society
- ai-for-all-catalysing-jobs-growth-and-opportunity
- ai-for-economic-growth-and-social-good-ai-for-all-driving-economic-advancement-and-societal-well-being
- ai-for-inclusive-societal-development
- sovereign-ai-for-india-designing-the-nations-future-compute-data-and-innovation-ecosystem
- scaling-innovation-building-a-robust-ai-startup-ecosystem
- empowering-the-next-generation-of-entrepreneurs-in-asia-and-india-through-ai
- ai-for-industries-resilience-innovation-and-efficiency