Panel Discussion on Reimagining Responsible AI: Frameworks, Safeguards, and Standards for the Next Wave of AI Innovation
Abstract
The panel explored how responsible‑AI frameworks must evolve to keep pace with emerging agentic and generative AI systems. Participants highlighted the primacy of trust—both in data handling and model behavior—while debating practical governance mechanisms such as observability, “trust layers,” compliance‑as‑APIs, default‑opt‑out data policies, and explainability. A brief Q&A examined the role of small language models (SLMs) versus large language models (LLMs) for enterprises. The session closed with a call to adopt the new Responsible‑AI readiness assessment tool released by The Dialogue and ICOM, and an invitation to join upcoming AI‑policy conversations.
Detailed Summary
1. Opening & Framing (Moderator)
- The moderator welcomed the audience and set the stage: responsible AI must now move from “check‑the‑box” compliance to trust‑by‑design and observable safeguards.
- He introduced the panelists and underscored the agenda: governance mechanisms, lifecycle oversight, risk‑assessment tools, and global standards, with a focus on agentic AI.
2. Trust, Data Sovereignty & the “AI‑with‑Purpose” Tagline (Panelist: Ashish Tewari)
- Key Insight: Trust stems from sovereignty over data. When clients retain control of their data, confidence in AI outcomes rises.
- Emphasised the emerging industry slogan “AI with purpose and trust.”
- Noted that while new AI startups are proliferating, established players still dominate delivery pipelines; however, delivery timelines are stretching, creating a market need for observable, trustworthy AI.
- Recommendation: Startups should showcase trust mechanisms (e.g., auditability, bias checks) alongside innovation to win enterprise contracts.
3. Salesforce’s Trust‑First Approach (Panelist: Arundhati Bhattacharya)
- Trust as a Core Value: Salesforce’s top value is trust, followed by customer success, innovation, equality, and sustainability.
- Cloud‑Native Architecture: Salesforce operates exclusively on the cloud (no on‑premises systems). This choice reduces the capital burden of building massive compute/storage infrastructure required for AI that processes “trillions of data points.”
- Capital‑Intensive Compute:
- Large‑scale AI demands massive storage and compute; on‑prem solutions in India would require prohibitive hardware investment.
- Therefore, logical security and trust must be embedded in cloud services.
- The “Trust Layer” – a multi‑faceted safeguard that:
- Guarantees data does not leave the tenant.
- Detects toxicity, bias, and hallucination in model outputs.
- Provides responsible, reliable results before they reach the end‑user.
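The trust‑layer idea described above can be sketched as a screening step between model output and end‑user. This is a minimal illustration under stated assumptions, not Salesforce’s actual implementation: the check functions, thresholds, and grounding heuristic are all placeholders.

```python
# Hedged sketch of a "trust layer": screen model output for toxicity and
# possible hallucination before it reaches the user. All checks here are
# toy stand-ins for real classifiers and retrieval-grounding systems.
from dataclasses import dataclass


@dataclass
class TrustVerdict:
    approved: bool
    reasons: list


def toxicity_score(text: str) -> float:
    # Placeholder: a production system would call a trained toxicity classifier.
    banned = {"hate", "slur"}
    return 1.0 if any(word in text.lower() for word in banned) else 0.0


def grounded_in_sources(text: str, sources: list) -> bool:
    # Placeholder hallucination check: require overlap with retrieved sources.
    return any(src.lower() in text.lower() for src in sources)


def trust_layer(output: str, sources: list, toxicity_threshold: float = 0.5) -> TrustVerdict:
    """Approve output only if it passes every safeguard; report failures."""
    reasons = []
    if toxicity_score(output) >= toxicity_threshold:
        reasons.append("toxicity")
    if not grounded_in_sources(output, sources):
        reasons.append("possible hallucination")
    return TrustVerdict(approved=not reasons, reasons=reasons)


verdict = trust_layer("Revenue grew 12% last quarter.", sources=["revenue grew 12%"])
```

The key design point is that the layer returns a structured verdict rather than silently rewriting output, so downstream systems can log, audit, or escalate rejections.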
- Product Timeline: Salesforce’s enterprise‑grade Copilot offering was delayed because the trust layer was being hardened—demonstrating that trust outweighs speed‑to‑market for responsible AI.
- Takeaway: Trust‑centric design is non‑negotiable for enterprise AI; security, bias mitigation, and hallucination control are integral components.
4. Governance as APIs & Default‑Opt‑Out Data Policies (Panelist: “Karna”)
- Probabilistic Nature of AI: AI models are statistical, not deterministic, which clashes with traditional compliance expectations for absolute correctness.
- Compliance‑as‑APIs:
- Proposed exposing regulatory rules (e.g., SEBI guidelines) as reusable APIs that can be plugged into any AI agent.
- This transforms compliance from a paper‑heavy process into a software‑driven infrastructure, accelerating ecosystem adoption.
- Default‑Opt‑Out for Customer Data:
- Default should be “no” for using customer data to train LLMs.
- An opt‑in approach respects privacy and forces startups to justify data usage, preventing the “race to the bottom” where data is harvested indiscriminately.
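The default‑opt‑out policy above amounts to a simple invariant in code: the consent flag defaults to “no,” and training pipelines filter on it. The record shape and field names below are assumptions for illustration.

```python
# Hedged sketch of default-opt-out consent: customer data is excluded from
# model training unless the customer explicitly opts in. Field names are
# illustrative assumptions.
from dataclasses import dataclass


@dataclass
class CustomerConsent:
    customer_id: str
    # Default is "no": data may NOT be used to train models.
    allow_training_use: bool = False


def training_corpus(records: list) -> list:
    """Keep only records whose owners explicitly opted in."""
    return [r for r in records if r.allow_training_use]


records = [
    CustomerConsent("cust-001"),                            # never asked -> excluded
    CustomerConsent("cust-002", allow_training_use=True),   # explicit opt-in
]
eligible = training_corpus(records)
```

Making the default a dataclass field (rather than a runtime decision) means a record created without an explicit choice is automatically excluded, which is the point of the policy.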
- Explainable AI (XAI):
- XAI must be a core output of AI services, not an afterthought.
- Embedding explainability into APIs enables partners to understand and trust model decisions in real‑time.
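Embedding explainability in the API contract itself can be sketched as a response that always carries the evidence behind a decision. The loan‑scoring logic below is a toy stand‑in; the field names and threshold are assumptions.

```python
# Hedged sketch of XAI-in-the-contract: every prediction ships with the rule
# and inputs that produced it. The scoring rule here is a toy illustration.
def score_loan(income: float, debt: float) -> dict:
    """Return a decision plus a structured explanation of how it was reached."""
    ratio = debt / income
    approved = ratio < 0.4
    return {
        "decision": "approve" if approved else "decline",
        "explanation": {
            "debt_to_income_ratio": round(ratio, 2),
            "threshold": 0.4,
            "rule": "decline when debt/income >= 0.4",
        },
    }


result = score_loan(income=100_000, debt=30_000)
```

Because the explanation is part of the response schema, partners can surface it to end‑users in real time instead of filing an after‑the‑fact audit request.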
- Goal: Aim for P99‑level reliability while accepting that some residual probabilistic error is inevitable, and invest in governance infrastructure rather than attempting to eliminate uncertainty entirely.
5. Audience Q&A – Small vs. Large Language Models (Moderator & Panelists)
- Question: “Do small language models (SLMs) present a profitable path for businesses, or will the market continue to gravitate toward large language models (LLMs) leading to a race to the bottom?”
- Response (Panelist, likely Ashish):
- Most enterprises are still in early‑stage large‑scale deployments, so they benefit from LLM capabilities (breadth, knowledge, zero‑shot performance).
- Over time, as use‑case clarity emerges, organizations can migrate to SLMs to gain lower latency and cost efficiency.
- The strategic path: start with LLMs to accelerate innovation, then evaluate whether a domain‑specific SLM can replace it for production workloads.
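The strategic path described above can be sketched as a routing rule: tasks whose scope has matured go to a cheaper domain SLM, while everything else stays on a general LLM. The model names and the task registry below are illustrative assumptions, not panel recommendations for specific products.

```python
# Hedged sketch of the LLM-to-SLM migration path: route well-scoped,
# production-mature tasks to a domain SLM for cost/latency, and default
# everything else to a general LLM. Names are illustrative assumptions.
SLM_READY_TASKS = {"invoice-extraction", "ticket-triage"}


def pick_model(task: str) -> str:
    """Choose a model tier based on whether the use case has matured."""
    return "domain-slm" if task in SLM_READY_TASKS else "general-llm"
```

The registry makes migration incremental: a task moves to the SLM tier only after its requirements are clear enough to validate a smaller model against them.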
- No further questions were entertained due to time constraints.
6. Closing Remarks & Announcements (Moderator: Kamesh Shekar & Closing Guest)
- Closing by Kamesh Shekar: thanked panelists and audience, acknowledged “event fatigue,” and highlighted the week’s intensive AI‑focused sessions.
- Announcement – Responsible‑AI Readiness Tool:
- The Dialogue and ICOM have released their first tool to assess Responsible AI readiness.
- Participants were urged to try the tool and adopt a “responsible‑by‑design” mindset from the outset.
- Future Engagements:
- The Dialogue will host further conversations on AI policy; attendees were invited to join.
- Acknowledgements: Thanked Infosys for collaboration, and extended gratitude to all speakers for their insights.
Key Takeaways
- Trust is foundational: Data sovereignty and observable AI systems are essential for enterprise adoption.
- Salesforce’s model demonstrates that cloud‑native, trust‑layered AI can deliver enterprise‑grade safeguards while avoiding the prohibitive capital expenditure of on‑prem infrastructure.
- Compliance‑as‑APIs can turn regulatory requirements into scalable software components, accelerating responsible‑AI deployment.
- Default‑opt‑out data policies protect user privacy and encourage ethical data usage.
- Explainable AI should be built into the service contract, not retrofitted after deployment.
- LLMs remain the pragmatic entry point for most enterprises; SLMs become attractive later for cost‑sensitive, latency‑critical workloads.
- The newly released Responsible‑AI readiness assessment tool offers a practical framework for organizations to benchmark and improve their governance posture.
- Ongoing AI‑policy dialogues are being organized; participation is encouraged to shape global standards.
See Also:
- scaling-trusted-ai-for-8-billion
- enterprise-adoption-of-responsible-ai-challenges-frameworks-and-solutions
- leveraging-artificial-intelligence-in-public-audit-for-greater-transparency-and-accountability
- navigating-the-ai-regulatory-landscape-a-cross-compliance-framework-for-safety-and-governance
- shaping-secure-ethical-and-accountable-ai-systems-for-a-shared-future
- ai-impact-forum-democratising-ai-resources
- governing-safe-and-responsible-ai-within-digital-public-infrastructure
- trustworthy-ai-investments-capital-allocations-as-ai-governance
- beyond-the-cloud-the-sovereign-ai-moment
- democratizing-ai-for-the-last-mile-language-access-and-trust-at-scale