Shaping Secure, Ethical and Accountable AI Systems for a Shared Future

Abstract

The panel examined how India and its private‑sector partners can build AI systems that are secure, ethical, transparent and accountable for 1.4 billion citizens and the global community. Four dilemmas were framed—innovation vs regulation, sovereignty vs interoperability, claim vs verification, and the role of governance as a “trust accelerator.” Panelists shared concrete experiences: licensing AI rights for cultural icons, scaling enterprise AI responsibly, building confidential‑compute clouds, confronting AI‑enabled cyber‑attacks, and codifying India’s AI‑mission governance pillars. The session closed with a rapid “one‑word” poll on what is essential for citizen trust in AI, followed by a brief felicitation ceremony.

Detailed Summary

1. Opening Framing

The moderator opened by stressing that AI is moving from “proof‑of‑concept” experiments to core infrastructure that touches identity, finance, health and public services. Four overarching tensions were introduced:

  1. Innovation ↔ Regulation – can government act as a trust accelerator rather than a bureaucratic brake?
  2. Sovereignty ↔ Interoperability – how to honor cultural “sutras” while keeping AI systems globally interoperable?
  3. Claim ↔ Verification – moving from “trust‑me” statements to provable security and auditability.
  4. Trust ↔ Transparency – embedding provenance, audit trails and responsible‑use checks throughout the AI lifecycle.

The moderator noted that the panel “represents the entire lifecycle of the challenge – from the architects of the AI mission to the leaders who scale the technology across the world’s largest enterprises and creative frontiers.”

2. Panelist Introductions

  • Mr Abinav Varma Kalidindi – Works at the intersection of culture and technology. Ikonz Studios holds exclusive AI rights for iconic Indian personalities (e.g., Amitabh Bachchan); the studio creates digital avatars that are the sole legal source for voice, likeness and visual representation – a 100‑year custodianship of cultural heritage.
  • Mr Ajit Kumar – 25 years of technology leadership (Accenture → HCL). Emphasises the shift from POCs to large‑scale AI deployments and the need for probabilistic‑outcome models that can recognise their own limits and communicate uncertainty. Calls for standards, processes, organisational structures and embedded engineering controls to assure trust, security and privacy.
  • Ms Charu Srinivasan – Engineer at Microsoft building the Azure infrastructure that runs AI workloads. Focuses on confidential compute and “zero‑trust” models in which even the cloud operator cannot see the data. Explains “confidential clean rooms” that enable multiple parties to compute on shared data without leakage, providing a techno‑legal guarantee of privacy.
  • Dr Monisha (Mandiant/Google Cloud) – Security‑consulting leader who describes Mandiant’s role in first‑response cyber‑forensics and its integration with Google Cloud. Stresses that AI‑driven adversaries can now automate zero‑day discovery, deploy “agentic shadow AI,” and launch attacks at a speed that outpaces traditional defences.
  • Dr Madan Oberoi – Lead architect of the India AI Mission and former senior police officer. Outlines seven governance sutras (trust, public good, innovation, fairness, inclusivity, security by design, sustainability) that guide AI policy, the Digital Public Infrastructure (DPI) and the Digital Personal Data Protection (DPDP) Act. Highlights the AI‑Bhashini language engine (36 dialects) and the AI Safety Institute for end‑to‑end safety checks.
  • Mr Alkesh K Sharma – Retired IAS officer who provides the government perspective on balancing sovereign AI ambitions with the need for interoperable, resilient systems. Cites the AI Mission’s transition from a small pilot to a multi‑billion‑rupee national programme after a Cabinet directive.
  • Mr C.P. Gurnani – Founder of IONOS and champion of identity protection. Introduces UniProtect, a service that secures digital identities and data and provides rapid recovery from cyber‑terror attacks. Argues that trust must be built into the product stack and that security must evolve alongside AI innovation.
  • Mr Harsh Malhotra – Government liaison who echoed the need for public‑sector leadership in setting security, trust and resilience as functional necessities rather than abstract ideals.

3. Thematic Deep‑Dive

3.1 Innovation vs. Regulation – “Trust Accelerator”

  • Ajit Kumar argued that responsible AI must be baked into the development pipeline. He identified four levers:

    1. Standards – clearly defined, measurable security, privacy and responsible‑use criteria.
    2. Process – lifecycle governance that checks those standards from ideation onward.
    3. Organisational Structures – dedicated teams that can veto a model that does not meet the criteria before production deployment.
    4. Embedded Technology – automated enforcement (e.g., model‑card generation, continuous compliance checks) that removes reliance on manual checklists.
  • Gurnani added that industry can treat regulation as a catalyst by aligning product roadmaps to meet trust expectations, using identity‑protection as a prototype for security‑by‑design.

  • Oberoi highlighted how India’s policy framework (principle‑based, not prescriptive) aims to avoid stifling innovation while mandating trust, safety, and resilience as non‑negotiable design pillars.
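Ajit Kumar’s fourth lever – automated enforcement – can be sketched as a minimal pre‑deployment gate that reads an auto‑generated model card and vetoes release when any standard is unmet. The card fields, criteria names and thresholds below are hypothetical illustrations, not anything the panel specified:

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Hypothetical model card captured automatically at build time."""
    name: str
    intended_use: str
    pii_in_training_data: bool
    fairness_gap: float          # worst-case accuracy gap across demographic groups
    security_review_passed: bool

def compliance_failures(card: ModelCard, max_fairness_gap: float = 0.05) -> list[str]:
    """Return the standards the model violates; an empty list means deployable."""
    failures = []
    if card.pii_in_training_data:
        failures.append("training data contains unredacted PII")
    if card.fairness_gap > max_fairness_gap:
        failures.append(f"fairness gap {card.fairness_gap:.2f} exceeds {max_fairness_gap}")
    if not card.security_review_passed:
        failures.append("security review not passed")
    return failures

card = ModelCard(
    name="loan-risk-v2",
    intended_use="retail credit scoring",
    pii_in_training_data=False,
    fairness_gap=0.08,
    security_review_passed=True,
)
problems = compliance_failures(card)
if problems:
    # the organisational structure (lever 3) holds the veto; the tooling enforces it
    print("VETO:", "; ".join(problems))
```

The point of the sketch is that the check runs in the pipeline on every build, so trust criteria are enforced mechanically rather than via manual checklists.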

3.2 Sovereignty vs. Interoperability – “Cultural Sutras”

  • Varma illustrated a cultural‑ownership model: Ikonz Studios licenses AI rights to iconic figures, ensuring that provenance, authenticity and cultural sensitivity are built into each avatar. The licence includes a “triple‑approval workflow” (icon’s team, studio, and the client) to guarantee legal accountability.

  • Oberoi described the seven AI‑mission sutras that embed cultural inclusivity: AI‑Bhashini now supports 36 Indian dialects, ensuring that language does not become a barrier to AI benefits.

  • Monisha warned that sovereign AI models must be securely trained on locally sourced data, otherwise they risk leaking sensitive national‑level information.

3.3 Claim vs. Verification – Provenance & Auditability

  • Varma emphasized digital‑file provenance: every generated asset must carry a cryptographic stamp indicating who created it, when, and under what constraints. This combats deep‑fake concerns and establishes legal traceability.

  • Srinivasan presented confidential‑compute clean rooms that produce attestation reports—cryptographically‑signed evidence that data remained inside a secure enclave. This satisfies both technical auditors and regulators.

  • Monisha highlighted the unseen threat of AI‑enabled attackers who can rapidly discover zero‑days and launch “shadow‑AI” agents. She argued that auditability alone is insufficient without proactive threat‑modelling and continuous red‑team testing.

  • The moderator’s follow‑up asked how to embed transparency into the AI lifecycle. Srinivasan replied with the clean‑room example, while Ajit Kumar reiterated that processes and tooling must enforce transparency from the very first design sketch.

3.4 Collective Responsibility – Ecosystem Coordination

  • Monisha posited that responsibility is shared among model developers, data providers, and end‑users. She warned of representation bias (e.g., disease models that ignore regional genetic diversity).

  • Oberoi noted the AI Safety Institute (government‑established) that will audit from concept to commissioning, ensuring the four pillars (trust, safety, resilience, sustainability) are met.

  • Gurnani described UniProtect, which offers identity‑insurance and rapid recovery, underscoring that protecting citizens also means preparing for breach fallout.

4. Audience Q&A – Key Questions & Answers

Questions below are paraphrased.

  • Q: How can we ensure democratic values and cultural sensitivities in digital personas?
    A: Varma – provenance and the triple‑approval workflow; restrict creation to sovereign, multilingual models; limit exposure until sovereign models mature.
  • Q: How do we build transparency into the AI lifecycle so that claims are verifiable?
    A: Srinivasan – confidential clean rooms and attestation reports; Ajit Kumar – embed standards and checks from ideation; Monisha – combine policy, techno‑legal contracts and immutable ledgers.
  • Q: What unseen security threats are emerging in 2026?
    A: Monisha – AI‑driven automation of zero‑day discovery; “agentic shadow AI” that creates back doors; the need for continuous adversary emulation.
  • Q: What is collective responsibility beyond consortium members?
    A: Monisha – developers must embed fairness, inclusion and privacy; Oberoi – national policy must enforce principle‑based governance; Gurnani – identity protection as a service to the broader public.
  • Q: How do we make AI systems auditable and cryptographically trustworthy?
    A: Srinivasan – clean‑room attestations; Varma – cryptographic provenance of each digital asset; Oberoi – the AI Safety Institute’s end‑to‑end audit.

5. Rapid‑Fire “One‑Word” Round (Trust‑Element)

The moderator asked each panelist to name a single word they deem essential for citizens to trust AI. The responses were:

  • Ajit Kumar – Transparency
  • Charu Srinivasan – Accessibility
  • C.P. Gurnani – Confidence
  • Alkesh K Sharma – Ethical
  • Harsh Malhotra – Privacy‑preserving
  • Madan Oberoi – Governance

These succinctly echoed the panel’s broader themes.

6. Closing & Felicitation

The session concluded with a brief felicitation ceremony in which senior officials (including CA Neeru Abrol and Dr Sandeep Marwa) presented shawls and a planter to the panelists as a token of appreciation. The moderator thanked everyone, reminded attendees of the next session at 5:30 p.m., and asked the audience to clear the hall for the following program.

Key Takeaways

  • Responsible AI must be embedded from the earliest design stage, not added as an afterthought. Standards, processes, organisational oversight and automated tooling together form a “trust accelerator.”
  • Cultural ownership and provenance are critical when AI recreates iconic personalities; a triple‑approval workflow and cryptographic stamps can ensure legal accountability.
  • Confidential‑compute clean rooms provide a practical way to guarantee privacy, auditability and legal admissibility for multi‑party AI collaborations.
  • AI‑enabled adversaries are now able to automate sophisticated attacks (zero‑day discovery, shadow agents); defenders need continuous red‑team testing, threat‑modelling and rapid‑recovery services.
  • India’s AI Mission is built on seven “sutras” (trust, public good, innovation, fairness, inclusivity, security‑by‑design, sustainability) and operationalised through the AI Safety Institute, AI‑Bhashini (36 languages), and DEPA frameworks for consent‑driven data sharing.
  • Collective responsibility spans model creators, data providers, regulators and end‑users; fairness and representation must be validated across India’s diverse demographic landscape.
  • The one‑word poll distilled the consensus: Transparency, Accessibility, Confidence, Ethical, Privacy‑preserving, Governance—all essential pillars for citizen‑level trust in AI.

These insights illustrate how policymakers, technologists and industry leaders can jointly shape AI systems that are secure, ethical, transparent and accountable, while still fostering the rapid innovation required for a shared, AI‑driven future.
