Embedded Governance for AI in Finance: An India-First, Risk-Based Policy Dialogue

Abstract

The panel explored how India can design and operationalise embedded, risk‑based governance for artificial‑intelligence (AI) systems used across the financial ecosystem. Participants compared AI governance to existing financial‑market controls, argued for compartmentalisation and “skin‑in‑the‑game” accountability, examined the experimental space offered by the new GIFT IFSC, debated supervisory relief for firms that implement robust AI controls, and discussed cyber‑security challenges posed by generative AI. The session concluded with audience questions on sovereign data assets, sandbox frameworks, and a rapid‑fire round on under‑ and over‑estimated AI risks, followed by closing remarks that framed AI governance as a trust‑building exercise essential to the summit’s “people, planet and progress” theme.

Detailed Summary

1. Ex‑Ante Accountability and Compartmentalisation (Ajay Kumar Chaudhary)

The moderator opened with a brief thank‑you and handed the floor to the first panelist, Ajay Kumar Chaudhary.

  • Key point – AI systems, like any other critical financial infrastructure, must operate under an ex‑ante framework that does not wait for a failure to assign responsibility.
  • Analogy to the stock market – The panelist likened AI governance to the established controls in equity markets: mandatory audits, explainability, and shutdown mechanisms when markets destabilise.
  • Compartmentalisation – Just as a firm cannot simultaneously be a bank and a broker (to avoid conflicts of interest), AI applications should be isolated into “compartments” that limit scope, reduce energy consumption, and simplify liability.
  • Skin‑in‑the‑game – Responsibility for AI outcomes must be pre‑assigned (e.g., algorithm designers, data owners, product owners) so that punitive measures can be enforced before damages cascade.

“If you can’t explain your accounts, you can’t be in the stock market. The same logic should apply to AI.”

2. GIFT City as an “AI Governance Lab”

Kamath (GIFT City, GIFT IFSC) was introduced to discuss the jurisdiction’s potential:

  • GIFT City’s IFSC was established in 2015, with its unified regulator (IFSCA) created in 2020, giving it a “clean‑slate” advantage – no legacy constraints and flexibility for experimentation.
  • The sandbox environment allows startups and established firms to pilot AI models under a controlled risk envelope.
  • Growth trajectory – The panelist cautioned that building a financial hub is a long‑term marathon; regulatory frameworks will evolve as the ecosystem reaches critical mass.

“We have legroom to experiment, but we must respect the gestation period any financial centre needs.”

3. RBI/IFSCA View on Supervisory Relief & Safe Harbours

Murlidhar Manchala (RBI) raised a question (later answered by Praveen Kamat, IFSCA) about whether firms that embed robust AI controls—model inventories, bias testing, continuous monitoring—should receive calibrated supervisory relief:

  • Proposed safe‑harbour – Entities that demonstrate full‑cycle governance (risk‑based guardrails, root‑cause analysis, manual overrides) could be treated as isolated instances rather than sources of systemic risk, meriting lighter‑touch oversight.
  • Transparency to customers – Emphasis on moving from a “black‑box” to a “glass‑box” model, where decisions are understandable to end‑users.
  • Incident‑reporting mechanisms – Formal processes for flagging AI anomalies and invoking manual shutdowns.
  • Incentives – A proposed AI‑in‑Finance award for solutions that serve priority sectors or the bottom‑of‑the‑pyramid, encouraging open‑source sharing and scaling.
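The guardrails discussed above – a model inventory with pre‑assigned owners, bias testing, continuous monitoring, and incident reporting – can be sketched as a minimal data structure. This is purely an illustration under assumed field names, not a design proposed by the panel or any regulator:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelRecord:
    """One entry in a firm's AI model inventory, with pre-assigned accountability."""
    model_id: str
    owner: str                 # "skin-in-the-game": a named accountable party
    bias_tested: bool
    monitoring_enabled: bool
    incidents: list = field(default_factory=list)

    def report_incident(self, description: str) -> None:
        """Formal incident-reporting hook: log an anomaly for root-cause analysis."""
        self.incidents.append((datetime.now(timezone.utc).isoformat(), description))

    def qualifies_for_relief(self) -> bool:
        """Lighter oversight only with full-cycle governance and a clean record."""
        return self.bias_tested and self.monitoring_enabled and not self.incidents

record = ModelRecord("credit-scorer-v3", owner="head-of-model-risk",
                     bias_tested=True, monitoring_enabled=True)
print(record.qualifies_for_relief())  # True
record.report_incident("score drift beyond tolerance")
print(record.qualifies_for_relief())  # False
```

The point of the sketch is that eligibility for relief is computable from governance metadata the firm already maintains, so supervisors could verify it mechanically.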

4. Cyber‑Security Implications of Generative AI

Vikram (global cloud service provider) addressed the security dimension:

  • Generative AI as an accelerant – While the technology enables faster phishing, credential‑theft, and malicious code generation, it does not fundamentally alter the core security principles.
  • Core controls remain essential – Multi‑factor authentication, strong passwords, patch management, regular vulnerability scanning.
  • Industry standards – Compliance with ISO, NIST, and third‑party attestation reports remains the baseline for banks and cloud providers.
  • Active participation – Regulators and cloud providers must move from passive observers to active participants, integrating AI‑driven automation (e.g., automated scanning, rapid reporting) while up‑skilling staff.
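The “core controls remain essential” point lends itself to the kind of AI‑era automation the panelist described: a simple audit loop that flags baseline‑control gaps. The account fields and thresholds below are hypothetical, chosen only to make the sketch self‑contained:

```python
# Hypothetical account inventory; field names and values are illustrative only.
accounts = [
    {"user": "ops-admin", "mfa": True,  "days_since_patch": 12},
    {"user": "batch-svc", "mfa": False, "days_since_patch": 45},
]

MAX_PATCH_AGE_DAYS = 30  # assumed policy threshold, not a standard's requirement

def audit(accounts):
    """Flag accounts violating baseline controls (MFA, patch currency)."""
    findings = []
    for a in accounts:
        if not a["mfa"]:
            findings.append((a["user"], "missing multi-factor authentication"))
        if a["days_since_patch"] > MAX_PATCH_AGE_DAYS:
            findings.append((a["user"], "patching overdue"))
    return findings

for user, issue in audit(accounts):
    print(f"{user}: {issue}")
```

In practice such checks would feed into the automated scanning and rapid‑reporting pipelines the panelist mentioned, rather than a standalone script.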

5. Economic‑Policy Perspective & the Need for “Compartmentalised” AI

Sanjeev Sanyal (Economic Advisory Council) stressed the macro‑policy angle:

  • Avoid a bureaucratic, risk‑based regime – AI’s emergent nature requires a creative regulatory stance rather than a rigid checklist.
  • Human‑in‑the‑loop vs. AI‑in‑the‑loop – Emphasised a shift toward “AI in the loop,” using AI to augment human decision‑making, not replace it.
  • Bounded vs. unbounded problems – AI excels at narrowly defined tasks (e.g., chess) but struggles with open‑ended strategic planning; governance should focus on bounded‑problem compartments.
  • Firewall analogy – Just as financial regulators impose “Chinese walls” to prevent conflicts of interest, AI systems should have firewalls that isolate emergent behaviours and provide clear “kill‑switches.”
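The compartment‑plus‑kill‑switch idea above can be sketched as a wrapper that confines a bounded task behind a guardrail and halts the compartment when an output breaches it. All names and the toy task are assumptions for illustration, not a prescribed architecture:

```python
class Compartment:
    """Wrap a bounded AI task behind a firewall with a kill-switch."""

    def __init__(self, name, model_fn, output_check):
        self.name = name
        self.model_fn = model_fn          # the bounded model being governed
        self.output_check = output_check  # guardrail: validates every output
        self.halted = False

    def run(self, request):
        if self.halted:
            raise RuntimeError(f"{self.name}: compartment is shut down")
        result = self.model_fn(request)
        if not self.output_check(result):
            self.halted = True            # abnormal output trips the kill-switch
            raise RuntimeError(f"{self.name}: guardrail breach, halting")
        return result

# Toy bounded task: a doubler whose outputs must never exceed 100.
comp = Compartment("limit-checker",
                   model_fn=lambda x: x * 2,
                   output_check=lambda y: y <= 100)
print(comp.run(30))   # 60 — within bounds
try:
    comp.run(80)      # 160 — trips the guardrail and halts the compartment
except RuntimeError as err:
    print(err)
```

The design choice mirrors the panel’s point: because the compartment’s scope is narrow, the guardrail can be a simple, auditable check, and shutting one compartment does not cascade into others.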

6. Rapid‑Fire & Audience Interaction

6.1. Under‑estimated risk

  • Panel consensus – The greatest under‑estimated risk is the absence of governance itself; without a robust framework the system is vulnerable to unknown failures.

6.2. Over‑estimated risk

  • Vikram argued that fearing a zero‑risk scenario is unrealistic; regulators must focus on nimbleness to adapt as technology evolves.

6.3. Reform priority (Sanyal)

  • Judicial reform – Faster, AI‑aware courts are needed to resolve novel disputes (e.g., liability for AI‑generated outputs, copyright of AI‑created works).

6.4. Audience question – Sovereign data & AI stack (Aditya, Founder, First Type)

  • Key points raised
    • India’s massive data trove (health, consumer behaviour) is a strategic asset.
    • The 2024‑25 budget’s tax holiday for data‑centre construction is a step toward “oil‑rigs” for AI.
    • LLMs are only one slice of AI; there is ample space for modest‑budget startups addressing bounded problems.
    • Emphasis on ownership of data rights and creation of “consent‑backed APIs” for regulated data sharing.
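The “consent‑backed API” idea above can be sketched minimally: regulated data is released only when the requester holds a consent grant the subject has recorded. The data model and names are hypothetical, not an actual account‑aggregator or DEPA implementation:

```python
# Consent ledger: (subject, requester) -> granted scope. Illustrative only.
consents = {("user-42", "lender-a"): {"scope": "bank-statements"}}

def fetch_data(subject, requester, scope):
    """Release data only against a matching, subject-granted consent."""
    grant = consents.get((subject, requester))
    if grant is None or grant["scope"] != scope:
        raise PermissionError("no valid consent on record")
    return f"{scope} for {subject}"  # stand-in for the regulated data payload

print(fetch_data("user-42", "lender-a", "bank-statements"))
```

A production scheme would add expiry, revocation, and cryptographic binding of the consent to the request, but the core invariant is the same: no grant, no data.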

6.5. Sandbox & interoperability (Praveen Kamat’s response)

  • Inter‑regulatory sandbox (RBI, SEBI, IFSCA, GIFT City) already exists, allowing cross‑jurisdictional pilots.
  • Legal harmonisation – Example: the IFSC permits transactions in only 16 specified foreign currencies, while the domestic regime under the RBI permits unrestricted INR‑based transactions. Solutions must respect such jurisdictional constraints.
  • Extended sandbox model – Plans to provide not only regulatory testing but also compute, data, and tooling support for innovators.

7. Closing Remarks

  • Sanjeev Sanyal summarised the trust discussion: AI, unlike UPI, is emergent; it can be trusted only if tightly bounded, auditable, and verified.
  • The moderator thanked the panel, announced the distribution of AI‑Mission mementos, and invited the panelists to stay for a group picture.

Key Takeaways

  • Embedded, risk‑based AI governance must be proactive – assign accountability ex‑ante and embed auditability, explainability, and shutdown mechanisms from the outset.
  • Compartmentalisation is essential – isolated AI “compartments” reduce systemic risk, lower energy consumption, and simplify liability.
  • GIFT City (GIFT IFSC) can serve as a sandbox‑based “AI governance lab” thanks to its clean‑slate regulatory status, but it will mature over a long gestation period.
  • Regulators may grant supervisory relief to firms that demonstrate comprehensive AI guardrails, bias testing, continuous monitoring, and transparent incident‑reporting.
  • Cyber‑security fundamentals remain unchanged despite generative AI; organisations must adopt AI‑enabled automation while adhering to ISO/NIST standards.
  • AI should be used for bounded problems; attempts to solve unbounded, strategic tasks risk hallucination and uncontrolled behaviour.
  • Judicial capacity must be upgraded to handle AI‑related disputes (liability, copyright, damages) promptly.
  • India’s sovereign data assets are a strategic resource; policies encouraging data‑centre investment and consent‑backed APIs are crucial for a vibrant AI ecosystem.
  • Inter‑regulatory sandboxes (RBI, SEBI, IFSCA, GIFT City) are already in place and will be expanded to include compute and tooling support, but legal harmonisation across jurisdictions remains a key challenge.
  • Rapid‑fire consensus: the biggest under‑estimated risk is the lack of governance; the biggest over‑estimated risk is the belief that AI can be made risk‑free.

Prepared from the verbatim transcript of the panel session at the AI Conference, Delhi, 24 Feb 2026.
