Responsible AI in Action: How Global Enterprises Are Building Trust at Scale
Abstract
The panel brought together senior AI leaders from Infosys, IBM, NVIDIA, Meta and Google DeepMind to explore how large enterprises can embed ethics, transparency and accountability into mission‑critical AI systems. The conversation moved from observations on how client expectations around responsible AI have shifted, through technical failure modes that emerge when AI is deployed at massive scale, to policy‑level questions about regulation, governance structures and the future role of trust‑graded AI. Participants shared concrete examples – from spreadsheet‑based governance to silicon‑level privacy guardrails – and debated practical trade‑offs such as the need for premium pricing for “trust‑grade” AI, the role of ads in democratising access, and the imperative of global standardisation.
Detailed Summary
1. Opening – Moderator (Manish Gupta)
- Moderator (Manish Gupta) opened the session by introducing the panelists – Geetha Gurnani (IBM), Sundar R. Nagalingam (NVIDIA), Sunil Abraham (Meta) and Syed Ahmed (Infosys).
- He highlighted the panel’s focus: how enterprises are building trust at scale while balancing speed, compliance and innovation.
- A quick photo was taken before the discussion began.
2. “What Surprises Me About Responsible AI?” – Geetha Gurnani (IBM)
- Shift‑left security mindset – In the past, security was an afterthought; now enterprises think security first and then layer on other concerns.
- Increasing prominence of trust and governance – Over the last 24 months, clients repeatedly ask “What is responsible AI?” and “What is trust?” indicating a maturing market.
- Governance on an Excel sheet – A senior leader resisted responsible AI, claiming it would block innovation; his only governance artifact was an Excel spreadsheet, underscoring the lack of scalable, automated controls.
- Key Insight: Trust becomes a prerequisite for scaling AI; without confidence in governance, organisations cannot move beyond pilot projects.
3. Scaling Failures – Sundar R. Nagalingam (NVIDIA)
- Infrastructure is rarely the first point of failure. Instead, the control systems that manage the infrastructure (e.g., micro‑service orchestration, security controls) break first.
- Two primary failure modes:
- Functional failure – AI services do not meet the intended use‑case performance at scale.
- Security/control failure – Small, overlooked vulnerabilities (e.g., missing validation, mis‑configured access controls) can lead to catastrophic breaches, even if the system appears to be operating “hunky‑dory.”
- Key Insight: When serving billions of users, robust governance and safety layers are as critical as raw compute capacity.
4. The “MaltBot” Phenomenon – Sunil Abraham (Meta)
- MaltBot / MaltBook were AI‑only social networks where agents created their own language and “religion.”
- Sunil expressed skepticism toward anthropomorphising AI – he treats AI behaviour as machine “hallucination” rather than genuine agency.
- He introduced three philosophical lenses:
- Ontology – What is the artifact (e.g., a single “weight file” controlling a model)?
- Epistemology – What is the truth of that artifact? AI is a dual‑use tool where a bug for one is a feature for another.
- Security‑first / Unix mental model – Isolate AI in a user‑space sandbox; if the file only has limited privileges, the risk is contained.
- Takeaway: Over‑humanising AI generates unnecessary fear; a technical, security‑first perspective offers clearer assurance.
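The Unix mental model described above can be made concrete: run the model as an ordinary unprivileged child process with hard kernel-enforced resource limits, so that a misbehaving workload is contained rather than feared. The sketch below is a minimal, hypothetical illustration using Python's `resource` and `subprocess` modules (POSIX only); the stand-in worker command and the specific limits are assumptions, not anything the panelists described.

```python
import resource
import subprocess
import sys

def run_sandboxed(cmd, cpu_seconds=5, mem_bytes=1024 * 1024 * 1024):
    """Run `cmd` in a child process with hard CPU and memory limits.

    A crude user-space sandbox: the kernel kills the child if it
    exceeds its CPU or address-space budget, so a runaway inference
    loop cannot take the host down with it.
    """
    def apply_limits():
        # Executed in the child just before exec(); limits apply
        # only to the child, never to the parent process.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    return subprocess.run(
        cmd,
        preexec_fn=apply_limits,
        capture_output=True,
        text=True,
        timeout=cpu_seconds + 5,
    )

# Stand-in "model worker": a trivial script that just emits a token.
result = run_sandboxed([sys.executable, "-c", "print('inference ok')"])
print(result.stdout.strip())
```

The point mirrors the panel's framing: if the process only holds limited privileges, the blast radius of a "hallucinating" model is bounded by construction rather than by trust.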
5. Defining Trustworthy AI – Geetha Gurnani (IBM)
- Terminology confusion: “Trust,” “security,” “governance,” and “compliance” are often used interchangeably, which obscures actionable goals.
- End‑user focus: Trustworthiness is measured by whether an end user can confidently consume an AI service. Three non‑negotiable pillars:
- Security validation – The model must have passed rigorous security testing.
- Hallucination monitoring – Continuous oversight of output quality.
- Regulatory compliance – Alignment with applicable laws and industry standards.
- Key Insight: Trustworthy AI is user‑centric; the underlying technical building blocks serve the same purpose across sectors.
6. Trust‑Buckets – Sundar R. Nagalingam (NVIDIA)
- Sundar abstracted trust into three universal buckets applicable to any regulator or industry:
- Functional Safety – Does the AI deliver the intended outcome (e.g., robotic‑assisted surgery)?
- AI Safety – Model training, bias mitigation, scenario simulation, and validation.
- Cybersecurity – Protection against malicious tampering of the AI‑enabled device/system.
- Accountability Gap: When an AI‑driven system fails, no single party bears responsibility (unlike a human surgeon), which raises expectations for safety even higher.
7. Distributed Architecture & Dual‑Use – Sunil Abraham (Meta)
- Discussed distributed deployment (e.g., Meta’s Llama models running on many independent endpoints), where an error in one deployment should not propagate at scale.
- Illustrated with a prompt scenario: A user asks a model why women shouldn’t hold senior positions; the model either refuses or reframes the question, reflecting built‑in safety constraints.
- Emphasised dual‑use tension: The same model can be weaponised (e.g., generating hate speech in low‑resource languages like Santali) but must also detect such misuse.
8. Embedding Governance at Runtime – Geetha Gurnani (IBM)
- Leadership commitment is the first prerequisite – senior management must treat responsible AI as non‑optional.
- From observation to control: Governance must be gate‑kept (e.g., an Ethical Board that must approve AI proposals before they go to market).
- Automation & tooling: Manual checklists are unsustainable; enterprises need automated workflows that embed compliance checks directly into the model‑deployment pipeline.
- Enterprise risk integration: AI risk should be folded into the overall enterprise risk posture, not treated as a separate silo.
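The shift from spreadsheet checklists to runtime‑enforced governance might look like a deployment gate that refuses to promote a model unless its governance record is complete. The sketch below is a hypothetical illustration only: the check names (drawn from the trust pillars discussed earlier in the panel), the record fields, and the ethics‑board flag are assumptions, not IBM's actual tooling.

```python
# Governance checks a model must pass before deployment; names are
# illustrative, loosely mirroring the panel's three trust pillars.
REQUIRED_CHECKS = (
    "security_validation",
    "hallucination_monitoring",
    "regulatory_compliance",
)

def deployment_gate(model_record: dict) -> tuple[bool, list[str]]:
    """Return (approved, reasons). A model ships only when every
    governance check has passed and the ethics board has signed off."""
    reasons = []
    for check in REQUIRED_CHECKS:
        if model_record.get("checks", {}).get(check) != "passed":
            reasons.append(f"missing or failed check: {check}")
    if not model_record.get("ethics_board_approved", False):
        reasons.append("no ethics board approval on record")
    return (not reasons, reasons)

# Hypothetical governance record for a customer-facing model.
record = {
    "model": "support-chatbot-v3",
    "checks": {
        "security_validation": "passed",
        "hallucination_monitoring": "passed",
        "regulatory_compliance": "pending",
    },
    "ethics_board_approved": True,
}
approved, reasons = deployment_gate(record)
print(approved, reasons)
```

Wired into a CI/CD pipeline, such a gate turns governance from an after‑the‑fact observation into a hard control: a pending compliance review blocks the release automatically instead of relying on someone remembering to consult a spreadsheet.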
9. Silicon‑Level Privacy Guardrails – Sundar R. Nagalingam (NVIDIA)
- Asked whether privacy guardrails belong in hardware, Sundar answered yes: future GPUs and high‑performance AI chips should embed privacy and safety guardrails at the silicon level.
- Current example: NVIDIA’s DriveOS on the Drive platform (autonomous‑driving chip) provides a secure OS layer that mediates between firmware, software, and hardware, ensuring functional safety for critical domains (autonomous vehicles, aerospace).
10. Corporate Ethics Boards – Sunil Abraham (Meta)
- Highlighted the shift of corporations toward academic‑style self‑regulation (e.g., ethical review boards, publishing research).
- Cited Meta’s Trusted Execution Environment paper: transient cloud compute is spun up for edge AI tasks and then immediately destroyed, reducing privacy exposure.
- Noted the complexity of hardware‑level attacks (over 30 attack vectors) and the necessity of continuous research to stay ahead.
11. Ads as a Democratising Mechanism – Sunil Abraham (Meta)
- Discussed OpenAI’s introduction of ads in ChatGPT.
- Argued that ad‑supported models can level the AI playing field, especially in a market like India where only ~25 % of the population currently uses generative AI.
- Emphasised that gratis access (free models + ad revenue) helps bridge the AI divide, though it raises Net‑Neutrality‑type concerns.
12. Premium for “Trust‑Grade” AI – Geetha Gurnani (IBM)
- Timing matters: Enterprises will invest in premium, trust‑grade AI when AI becomes core to the business model or consumer‑facing product.
- Internal experiments may rely on cheaper, less‑governed models; high‑risk, brand‑critical use‑cases (e.g., customer‑facing services) trigger premium purchases for robust governance.
13. Global Standardisation Across Geographies – Sundar R. Nagalingam (NVIDIA)
- Proposed a templatised approach:
- Platform safety – A base safety architecture that can be customised per jurisdiction.
- Algorithmic safety – Core algorithms are safe by design; adjustments address local regulatory nuances.
- Ecosystem safety – Supplier and vendor chains are vetted per region.
- This three‑pronged framework enables consistent trust enforcement while respecting local regulations.
14. Open‑Source Freedom vs. Dual‑Use – Syed Ahmed (Infosys)
- Stressed that open‑source licences (e.g., BSD) preserve freedom of use, crucial for dual‑use technologies.
- When a model is wrapped into a proprietary product, those freedoms disappear, creating restrictions on downstream innovation.
- Illustrated with the hypothetical need for a Santali hate‑speech classifier – synthetic data may be required because collecting real hate speech is unethical.
15. Rapid‑Fire Global‑Regulation Poll
| Question | Consensus (Yes/No) |
|---|---|
| Should there be global alignment on AI regulation? | No (panelists split) |
| Should there be minimum technical safeguards globally? | Yes (technology‑level table stakes) |
| Are AI models outpacing governance? | Yes (unanimous) |
| Would you delay a launch of a more capable but less safe model? | Depends on use‑case (mixed) |
| Have you stopped a project due to safety concerns? | Yes (e.g., facial‑recognition on Facebook) |
| Should AGI be regulated now? | No / Not yet (general agreement) |
| Should mandatory watermarking be required for AI‑generated content? | Mixed (acknowledged need, but concern about over‑branding) |
16. Closing Remarks
- Moderator thanked the panel and the audience.
- Applause concluded the session.
Key Takeaways
- Trust is the new prerequisite for AI scale – enterprises will not invest in massive deployments until governance, security and compliance are baked in.
- Security‑first, “shift‑left” thinking is now mainstream; it precedes functional or performance considerations.
- Governance must move from Excel‑sheet checklists to automated, runtime‑enforced controls (e.g., ethical board approvals, CI/CD pipelines with built‑in compliance checks).
- Three universal trust buckets (functional safety, AI safety, cybersecurity) apply across industries and geographies; a template‑based approach enables localisation without reinventing the wheel.
- Silicon‑level privacy guardrails are on the horizon (e.g., NVIDIA’s DriveOS), signalling that hardware manufacturers will play a larger role in responsible AI.
- Ads can democratise AI access, especially in emerging markets, but must be balanced against concerns over user autonomy and net‑neutrality.
- Premium pricing for trust‑grade AI will be driven by risk exposure – consumer‑facing, brand‑critical applications justify higher spend on governance.
- Open‑source freedom is essential for dual‑use scenarios; imposing proprietary constraints too early can stifle innovation.
- Regulatory harmonisation is still fragmented; however, minimum technical safeguards should be globally recognised as a baseline.
- Rapid AI advances outpace governance; organisations must adopt proactive, anticipatory policies rather than reactive fixes.
- Mandatory watermarking for AI‑generated media is widely debated; while useful for provenance, over‑branding may blur the line between human and AI creativity.
These insights collectively illustrate the multifaceted challenge of building trustworthy, scalable AI and provide actionable directions for enterprises, policymakers, and technology providers alike.
See Also:
- scaling-trusted-ai-for-8-billion
- shaping-secure-ethical-and-accountable-ai-systems-for-a-shared-future
- enterprise-adoption-of-responsible-ai-challenges-frameworks-and-solutions
- ai-impact-forum-democratising-ai-resources
- governing-safe-and-responsible-ai-within-digital-public-infrastructure
- trustworthy-ai-investments-capital-allocations-as-ai-governance
- pathways-for-equitable-ai-compute-access
- ai-for-inclusive-economic-progress-the-public-services-ai-stack
- welfare-for-all-ensuring-equitable-ai-growth-across-the-worlds-largest-and-oldest-democracies