Safe and Trusted AI: Standardization in the age of LLMs, Generative and Agentic AI
Detailed Summary
1. Opening & Framing (0:00‑5:00)
- Moderator (Reena Garg) welcomed participants to the AI Impact Summit (July 2026, New Delhi) and framed the central question: How can AI be deployed responsibly as it moves from research labs into governance, industry, healthcare, finance, and everyday decisions?
- Emphasised that the focus is on standardization as the bridge between ethical principles, regulatory frameworks, and trustworthy AI systems.
2. Defining “Trusted AI” – First Panelist (5:00‑15:00)
| Speaker | Key Points |
|---|---|
| Dr. Sridhar Chimalakonda | • Trust is multidimensional – it varies by domain, stakeholder, and system autonomy (e.g., a web browser vs. an autonomous weapon). • Existing standards (from ISO/IEC JTC 1/SC 42) enumerate ~12 trust properties (reliability, safety, etc.) but are principle‑based, not sector‑specific. • Current frameworks lack operational guidance: concrete metrics, measurement methods, and accountability allocation for multi‑agent software. • Example challenge: if software produced by 20 AI agents fails, which agent bears responsibility? • Calls for deeper, scenario‑specific definitions of trust and sector‑tailored standards. |
3. International Standard‑Setting Landscape (15:00‑30:00)
| Speaker | Key Points |
|---|---|
| Mr. Rohit Israni | • International AI standardization began around 2015‑2017 (joint ISO/IEC work). • ISO/IEC JTC 1/SC 42 (a horizontal committee) has produced foundational work: common vocabularies, data standards, testing methods. • ISO/IEC 42001 (AI Management System standard), released roughly 1.5 years earlier, was the first certifiable AI standard. • Highlights the rapid evolution of AI models (quarterly upgrades) as a major challenge for standards keeping pace. |
| Mr. Tim McGarr | • The certification landscape is already active: major general‑purpose AI providers hold ISO/IEC 42001 certifications. • Certification must dovetail with existing cybersecurity (ISO/IEC 27001) and privacy (ISO/IEC 27701) standards. • Emphasises that assurance frameworks can adapt to AI’s pace if they integrate with well‑established security/privacy standards. |
| Ms. Gayathri Ekambaram | • Generative AI introduces new risk vectors: deep‑fakes, IP infringement, and hallucinations (confidently fabricated facts). • Agentic AI adds operational risks: autonomous API calls, irreversible transactions, and unsupervised execution of AI‑driven actions. • TCS’s “Safety Tenets” (Secure, Accountable, Fair, Transparent, Identity‑protected) form a corporate responsible‑AI framework. |
| Mr. Abhishek Aggarwal | • Indian AI Governance Framework (released Nov 2025) sets high‑level objectives (accountability, explainability, data access). • Standards translate these objectives into implementable technical processes (dataset documentation, model validation, lifecycle management). • Standards provide consistency, comparability, and transparency across sectors such as healthcare and agriculture. • Emphasises that standards are the “operational backbone” for governance, defining who is responsible across the AI value chain. |
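The agentic‑AI operational risks noted above (autonomous API calls that may execute irreversible transactions) are commonly mitigated with a human‑approval gate. The sketch below is a minimal, hypothetical illustration in Python: the action names and the reversible/irreversible taxonomy are assumptions for illustration, not drawn from any standard discussed in the session.

```python
from dataclasses import dataclass, field

# Hypothetical guardrail: block irreversible agent actions unless a human
# has approved them. The action taxonomy below is illustrative only.
IRREVERSIBLE_ACTIONS = {"transfer_funds", "delete_record", "send_contract"}

@dataclass
class AgentAction:
    name: str                          # tool/API the agent wants to invoke
    params: dict = field(default_factory=dict)

def requires_approval(action: AgentAction) -> bool:
    """An action is escalated to a human if it cannot be undone."""
    return action.name in IRREVERSIBLE_ACTIONS

def execute(action: AgentAction, human_approved: bool = False) -> str:
    """Run the action, or block it pending approval."""
    if requires_approval(action) and not human_approved:
        return f"BLOCKED: '{action.name}' awaits human approval"
    return f"EXECUTED: {action.name}"
```

Under these assumptions, `execute(AgentAction("transfer_funds"))` is blocked until approval is recorded, while a read‑only call such as `execute(AgentAction("get_balance"))` proceeds unimpeded.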
4. The Co‑evolution of Governance & Standards (30:00‑45:00)
| Speaker | Key Points |
|---|---|
| Dr. Sridhar Chimalakonda | • Current Indian guidelines are suggestive, not enforceable, reflecting a trade‑off between enabling innovation and preventing harm. • Proposes a stage‑wise approach: enforce standards in high‑risk sectors, retain guidance‑only status in low‑risk domains. • Calls for privacy‑verification tools (e.g., DPDP compliance checks) to certify AI services. |
| Mr. Rohit Israni | • External forums (OECD, MLCommons) feed pre‑standardization insight: – OECD’s AI Incident Monitoring Framework is now an SC 42 work item. – MLCommons supplies benchmark suites (AILuminate, jailbreak tests) for red‑team testing. • Such consortia accelerate standards development by providing concrete testing artefacts. |
| Mr. Tim McGarr | • Standards must continue to evolve with LLM and generative AI innovations. • Emphasises liaison work (≈90 liaisons) to import emerging ideas into the standards community. |
| Ms. Gayathri Ekambaram | • Existing standards give a solid foundation but must be extended for Gen‑AI & agentic AI. • SC‑42 is focusing on Agentic AI (TKA) standards: interoperability, orchestration, resilience. |
| Mr. Abhishek Aggarwal | • Standards complement regulations: regulations set what must be achieved; standards define how. • In India’s pro‑innovation stance, standards ensure AI systems are globally competitive while meeting national priorities (accountability, explainability, data access). |
5. Concrete Recommendations & “One Action” Question (45:00‑55:00)
| Speaker | Recommendation |
|---|---|
| Tim McGarr | Adopt ISO/IEC 42001 (AI Management System) and pursue certification; leverage existing cybersecurity/privacy standards for assurance. |
| Gayathri Ekambaram | Integrate TCS’s “Safety Tenets” throughout the product lifecycle; treat standards as organic components, not external add‑ons. |
| Rohit Israni | Use ISO/IEC 42007 (proposed conformity‑assessment scheme) for sector‑specific testing beyond 42001. |
| Sridhar Chimalakonda | Move from static to continuous AI assessment; develop labeling/benchmarking mechanisms to indicate compliance status. |
| Abhishek Aggarwal | Implement “trust‑by‑design” across lifecycle stages (design → deployment → post‑deployment audit) and develop toolkits (watermarking, defect detection, machine unlearning). |
| Moderator (Reena Garg) | Summarised the shared “one concrete action”: adopt a continuous, lifecycle‑wide trust‑by‑design approach anchored by ISO/IEC 42001/42007, complemented by sector‑specific extensions and real‑time labeling of AI products. |
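The panel’s shared action (continuous, lifecycle‑wide assessment that yields a compliance label) can be sketched as a pass‑rate aggregation over automated checks run on each model release. This is a minimal illustration under assumed conventions: the check names, the 90% threshold, and the two‑value label scale are all hypothetical, not taken from ISO/IEC 42001 or any scheme named in the session.

```python
from typing import Callable

def assess(checks: dict[str, Callable[[], bool]], threshold: float = 0.9) -> dict:
    """Run every automated check and emit a compliance label.

    Intended to run on each model release (continuous assessment),
    rather than once at deployment time.
    """
    results = {name: check() for name, check in checks.items()}
    pass_rate = sum(results.values()) / len(results)
    label = "compliant" if pass_rate >= threshold else "needs-review"
    return {"results": results, "pass_rate": pass_rate, "label": label}

# Example: three illustrative checks for a new model version.
report = assess({
    "no_pii_leak": lambda: True,
    "toxicity_under_limit": lambda: True,
    "hallucination_rate_ok": lambda: False,
})
```

In practice each lambda would wrap a real benchmark run (red‑team suite, privacy scan, etc.); the point of the sketch is only that the label is recomputed on every release rather than certified once.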
6. Audience Interaction – Student Question (55:00‑58:00)
- Ankita (M.Tech Cyber‑Security student) asked about the need for configurable, context‑specific standards given India’s linguistic and sectoral diversity, and the forthcoming impact of quantum computing.
- Sridhar responded that a framework allowing modular configuration of existing standards to suit domain‑specific risk profiles is essential, and that sector‑level regulators (RBI, SEBI, CCI) already produce tailored guidelines.
7. Closing Remarks, Mementos & Vote of Thanks (58:00‑End)
- Moderator thanked panelists and audience, emphasising the need for continuous conformity assessment and labeling of AI systems (referencing the forthcoming ISO/IEC 42006 conformity‑assessment standard).
- BIS representative presented mementos to all panelists and the moderator.
- Vote of thanks delivered by Mrs. Shithesh Bhatla (Member, Security of AI Committee), reaffirming the session’s focus on standard‑driven responsible AI.
Key Takeaways
- Trusted AI is multidimensional; trust criteria differ by domain, stakeholder, and system autonomy.
- Existing standards (ISO/IEC JTC 1/SC 42 deliverables, ISO/IEC 42001) provide a horizontal foundation but lack the sector‑specific guidance needed for real‑world deployments.
- International bodies (ISO, IEC, OECD, MLCommons) play a crucial role in pre‑standardization and benchmark provision, enabling standards to keep pace with fast‑moving AI technology.
- Certification to ISO/IEC 42001 has already been adopted by major AI providers; however, continuous assessment and product labeling are essential to reflect the dynamic nature of AI systems.
- India’s AI Governance Framework operates at a high level; standards translate policy into actionable processes, defining accountability across the AI value chain.
- Agentic and generative AI introduce novel risks (hallucinations, autonomous actions, IP infringement) that demand expanded standards focusing on resilience, robustness, and ethical tenets.
- A stage‑wise regulatory approach—mandatory standards for high‑risk sectors, guidance‑only for lower‑risk domains—is recommended to balance innovation with user safety.
- Continuous “trust‑by‑design” across the AI lifecycle, anchored by ISO/IEC 42001/42007 and complemented by sector‑specific extensions, is the primary actionable step for organizations.
- Labeling/benchmarking mechanisms (e.g., the forthcoming ISO/IEC 42006 conformity‑assessment standard) will help end‑users quickly assess an AI system’s compliance status.
- Collaboration among government, standards bodies, industry, and academia is essential to develop configurable, context‑aware standards that accommodate India’s linguistic and sectoral diversity, as well as future technologies like quantum computing.