Responsible AI: From Principles to Practice in Corporate India

Abstract

The session examined how Indian enterprises can move beyond high‑level Responsible‑AI frameworks and embed concrete practices into strategy, governance, and day‑to‑day operations. After a short address by Adobe’s Andy Parsons on the C2PA standard and the need for provable provenance of AI‑generated content, a panel of senior leaders from RPG Group, Adobe, Air India, and NPCI discussed practical governance models, industry‑wide standards, regulatory pressures, and the challenges of scaling responsible AI across large conglomerates, aviation, and national payment systems. The dialogue highlighted the importance of open standards, cross‑industry collaboration, and a balanced regulatory regime, while stressing that responsible AI must be an operational discipline—not merely a compliance checklist.

Detailed Summary

1. Opening Remarks

  • The session was introduced as part of the AI Impact Summit, co‑hosted by Adobe and FICCI. The moderator framed the discussion around moving “from principles to practice” for Responsible AI in corporate India.
  • Andy Parsons – “Content Authenticity as a Living Example”
    • Positioned Responsible AI as an enterprise discipline that will be mandatory by 2026 (EU AI Act enforcement, California AI law, and upcoming Indian AI regulations).
    • Described the C2PA (Coalition for Content Provenance and Authenticity) – an open, royalty‑free standard that attaches a cryptographic “content credential” to every piece of media (image, video, audio).
    • Explained Adobe’s five‑year journey: from a “feature‑add‑on” mindset to baking provenance into the core of tools such as Firefly (generative AI) and Acrobat Assistant (agentic PDF helper).
    • Highlighted three pillars – Transparency, Accountability, Inclusivity – and used the “nutrition label” analogy (a user should know what data, models, and devices created a piece of content).
    • Challenges noted: uneven adoption across platforms, metadata stripping by social networks, low consumer awareness, and the difficulty of building a business case for provenance (it may not be directly profitable but is essential for trust).
    • Stressed that standards must be open, interoperable, and industry‑wide—the same collaborative model that enabled India’s UPI payments infrastructure.
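The “content credential” idea can be illustrated with a minimal sketch: bind a claim about how an asset was made to the asset itself via a hash and a signature, and verify both later. This is only a toy illustration, not the actual C2PA manifest format (which uses JUMBF containers and X.509 certificate signatures rather than a shared HMAC key); all names below are hypothetical.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # illustrative only; C2PA uses certificate-based signing


def attach_credential(asset: bytes, tool: str, model: str) -> dict:
    """Build a simplified 'content credential': a claim about how the
    asset was produced, bound to the asset by its SHA-256 hash and signed."""
    claim = {
        "asset_sha256": hashlib.sha256(asset).hexdigest(),
        "generator": tool,   # e.g. the editing or generating application
        "model": model,      # e.g. the generative model used
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim


def verify_credential(asset: bytes, credential: dict) -> bool:
    """Check that the signature is intact and the asset still matches the claim."""
    claim = {k: v for k, v in credential.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, credential["signature"])
            and claim["asset_sha256"] == hashlib.sha256(asset).hexdigest())


image = b"\x89PNG...demo bytes"
cred = attach_credential(image, tool="Firefly", model="image-gen-v2")
print(verify_credential(image, cred))         # intact asset verifies
print(verify_credential(image + b"x", cred))  # tampered asset fails
```

The key property mirrors the panel’s “nutrition label” analogy: anyone holding the asset and its credential can check what claims travel with it, and any tampering with either breaks verification.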

2. Panel Introduction (Moderator: Shantri Mallaya)

  • Shantri welcomed the audience, emphasized India’s leadership role in shaping trustworthy AI, and introduced the four panelists, noting that each represents a distinct industry (conglomerates, creative software, aviation, payments).

3. Perspectives from the Panel

3.1 Amol Deshpande – Governance Across the AI Stack

  • Five‑layer view – AI responsibility must be embedded at every stage (data, model, deployment, monitoring, and governance).
  • Described the evolution from center‑of‑excellence pilots to enterprise‑wide “playground” environments that give business units agility while maintaining guardrails.
  • People focus – Upskilling staff, retaining human oversight (especially as AI moves toward “agentic” capabilities).
  • Process & governance – Introduced “bring‑your‑own‑AI” (BYOA) model: each function can select its tools, but must conform to a template of safeguards (risk assessment, bias checks, audit trails).
  • Highlighted the need for scalable, reusable governance artifacts that can be cascaded through RPG’s diverse verticals (manufacturing, healthcare, agriculture).

3.2 Prativa Mohapatra – Operationalising Responsible AI at Adobe

  • Presented Adobe’s ART framework (Accountability, Responsibility, Transparency) as the core operating model.
  • Firefly example – Every generated asset automatically receives a provenance badge; input data is vetted for licensing, and output is screened for policy compliance.
  • Acrobat Assistant – Demonstrated how an AI‑powered PDF helper respects the provenance of the source documents; it can flag synthetic citations that might mislead legal professionals (citing a Supreme Court anecdote).
  • Stressed cross‑functional integration: legal, compliance, and product teams must all own AI governance; otherwise, responsibility becomes a “check‑box”.
  • Discussed industry challenges:
    1. Metadata loss on social platforms.
    2. Low consumer awareness (many attendees hadn’t recognized the C2PA badge).
    3. Business case tension – provenance may not generate revenue but is essential for brand trust and regulatory compliance.

3.3 Dr Satya Ramaswamy – AI‑Driven Customer Service at Air India

  • Shared Air India’s AI.g – a generative AI virtual assistant launched in May 2023 (first of its kind in the airline industry).
  • Scale & performance – Handled ~13 million queries to date; processes ~40,000 queries per day with a 97 % autonomous resolution rate.
  • Safety & risk management
    • Continuous monitoring for jailbreaks, prompt injection, and inappropriate responses.
    • Integrated prompt firewalls and human‑in‑the‑loop feedback (customers rate answer relevance and appropriateness).
    • No incidents of harmful output reported across 200 days of operation.
  • Partnership with Adobe – Leveraged Adobe’s provenance tools to audit content generation and to obtain indemnity for potential liability, reinforcing the “standards‑based risk mitigation” narrative.
  • Emphasised that regulation acts as a catalyst: compliance with global (FAA, EASA, DGCA) and emerging Indian AI guidelines pushes Air India to embed safety from the start, rather than retrofitting later.

3.4 Vishal Kanwaty – Fraud Detection & Transparency at NPCI

  • Scale of UPI – Processes billions of transactions each month; AI is essential for real‑time fraud detection.
  • Model approach – Anomaly detection models trained on historic transaction data; focus on low false‑positive rates to avoid degrading user experience.
  • Transparency for users – Deployed a small LLM that explains why a transaction was declined, e.g., “suspicious activity on a new QR code”.
  • Regulatory alignment – NPCI follows RBI guidelines, the DPDP Act, and emerging AI governance frameworks from MeitY.
  • Highlighted human‑in‑the‑loop systems: AI flags potential fraud, then a compliance analyst reviews edge cases.
  • Stressed that standardised governance (shared across banks) is vital for a consistent user experience and for maintaining trust in the national payments ecosystem.
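The flow described above — a model flags outliers, a human analyst reviews edge cases — can be sketched with a toy z‑score detector on transaction amounts. This is purely illustrative (NPCI’s production models are trained on far richer features than amount history); the function and threshold below are assumptions for the sketch.

```python
from statistics import mean, stdev


def flag_anomalies(history, new_amounts, z_threshold=3.0):
    """Flag transactions whose amount deviates sharply from the account's
    history. Flagged items would be routed to a human analyst for review,
    keeping false positives low for everything else."""
    mu = mean(history)
    sigma = stdev(history)
    flagged = []
    for amt in new_amounts:
        z = (amt - mu) / sigma if sigma else 0.0
        if abs(z) > z_threshold:
            flagged.append((amt, round(z, 1)))  # amount plus its z-score
    return flagged


# Typical spending pattern for one account (illustrative amounts)
history = [120, 90, 150, 110, 130, 100, 140, 95]

# 125 is in line with history; 5000 is a sharp outlier
print(flag_anomalies(history, [125, 5000]))
```

A real deployment would also need the user‑facing explanation step the panel described; here, the z‑score attached to each flagged item is the raw material a small LLM could turn into a plain‑language reason for the decline.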

3.5 Cross‑Panel Themes & Open Questions

  • Open standards & interoperability – Adobe’s C2PA, Air India’s partnership with Adobe, and NPCI’s industry‑wide models all stress that open, non‑proprietary frameworks enable scaling.
  • Governance model (central vs decentralised) – Amol advocated “bring‑your‑own‑AI” with enterprise‑wide templates; Prativa emphasized product‑level ART; NPCI showed a hybrid of central models with local analyst review.
  • Regulation vs self‑regulation – Consensus that regulation is inevitable (Vishal, Satya) but must be light‑touch to avoid stifling innovation (Amol, Prativa).
  • MSME inclusion – Prativa warned that small firms lack legal/AI teams; suggested that large‑player standards become “public goods” that MSMEs can adopt.
  • Human‑in‑the‑loop – Repeated across all speakers as a safety net (Air India’s manual override, NPCI’s analyst review, RPG’s governance playbooks).
  • Metrics for success – Transparency (provenance badges), fairness (low false‑positive fraud rates), accountability (audit trails), and user trust (customer satisfaction scores).

4. Closing Remarks

  • Sarika Guliani (FICCI) thanked the panel, reiterated the summit’s theme “People, Planet & Progress”, and promised to carry the dialogue forward into concrete industry initiatives.
  • She highlighted that responsible AI is not a compliance exercise but a commitment to shared human values, and that continued collaboration among corporates, regulators, and standards bodies is essential.

Key Takeaways

  • Responsible AI must be provable, not just promised. Provenance standards like C2PA enable enterprises to demonstrate how AI‑generated content was created.
  • Open, cross‑industry standards are the linchpin for scaling responsible AI across sectors (creative software, aviation, payments, conglomerates).
  • Governance cannot be a single‑size‑fits‑all checklist. Companies need flexible “bring‑your‑own‑AI” frameworks, layered with enterprise‑wide templates, to accommodate diverse business units.
  • Human‑in‑the‑loop oversight remains essential, especially for high‑risk domains (flight safety, fraud detection).
  • Regulation is inevitable but should act as a catalyst, not a roadblock; light‑touch, principle‑aligned rules encourage faster, safer adoption.
  • MSMEs risk being left behind unless large players release open‑source tools, public‑good standards, and shared governance blueprints.
  • Transparency to end‑users (e.g., explaining declined transactions or AI‑generated content) builds trust and satisfies emerging legal obligations.
  • Metrics for responsible AI should include provenance adoption rates, false‑positive fraud rates, audit‑trail completeness, and user‑trust scores.
  • Industry collaboration (e.g., Adobe‑Air India partnership, NPCI’s ecosystem approach) demonstrates that shared standards can solve common safety challenges.
  • The dialogue is ongoing – FICCI pledged to turn these discussions into actionable roadmaps, reinforcing the summit’s broader ambition of “People, Planet & Progress”.
