Building a Trusted and Resilient AI Infrastructure Ecosystem: Balancing Innovation, Security, and Rights

Abstract

The panel explored what constitutes a secure, trustworthy AI ecosystem in India and globally. Participants addressed technical resilience of data‑centres, regulatory clarity, the role of multistakeholder platforms such as the IGF, organisational bottlenecks to AI adoption, and the need for hybrid‑cloud‑edge architectures. The discussion highlighted a “hybrid” model (hyperscalers + private cloud), stressed the importance of governance embedded at code‑level, and called for lightweight, sector‑specific regulation supported by best‑practice self‑regulation. An accompanying research report was announced for public release.

Detailed Summary

1. Opening & Report Announcement

The moderator introduced the panel, explained the rapid‑fire format (two questions per panelist, each with a three‑minute response), and announced a forthcoming report launch.


2. Resilience by Design – Infrastructure Perspective

Speaker: Raju Vegesna (Sify)

  • Core definition – Resilience means “always‑on” operation, meeting Service Level Agreements (SLAs) and quality standards.
  • Key dimensions of a resilient data‑centre:
    1. Physical design – high‑density racks, advanced cooling plumbing, and robust power distribution.
    2. Power reliability – dual sources (grid + renewables) and uninterrupted supply.
    3. Operational intelligence – applying AI to monitor thousands of parameters for predictive maintenance.
  • Mission‑critical workloads (e.g., NPCI, UPI, Digi Yatra) cannot tolerate downtime; therefore, resilience must be baked in from day one.
  • Cost‑effectiveness – Indian deployments must achieve resilience without prohibitive CAPEX, balancing price with reliability.

Key Insight: Resilience is not a “nice‑to‑have” feature but a baseline requirement for any AI service that underpins financial or public‑sector transactions.
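The "operational intelligence" point can be illustrated with a toy example: a Python sketch that flags sensor readings deviating sharply from the norm. The threshold and readings are assumptions for illustration, not Sify's actual monitoring method.

```python
from statistics import mean, stdev

def flag_anomalies(readings, threshold=2.0):
    """Return indices of readings more than `threshold` standard
    deviations from the mean -- a toy stand-in for predictive
    maintenance over thousands of data-centre parameters."""
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return []
    return [i for i, r in enumerate(readings) if abs(r - mu) / sigma > threshold]

# Simulated rack-inlet temperatures (deg C); the spike suggests a cooling fault.
temps = [24.1, 24.3, 23.9, 24.0, 24.2, 31.5, 24.1]
print(flag_anomalies(temps))  # -> [5]
```

Production systems would use far richer models, but the principle is the same: continuous statistical monitoring turns raw telemetry into early warnings before an SLA is breached.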


3. Regulatory Priorities – Legal Perspective

Speaker: Shahana Chaterji (Legal)

  • Three priority regulatory needs:
    1. Utility reliability – clear, enforceable standards for electricity, water, and other critical resources (currently fragmented across Indian states).
    2. Tax regime – a “stimulating” fiscal framework is still being refined; existing budget proposals need more detail to attract investment.
    3. Risk‑classification & liability – while a formal framework is not yet essential, market‑driven contractual mechanisms (e.g., liability allocation clauses) must be documented.
  • Best‑practice guidance – organisations should develop templates for liability, data‑privacy (DPDP Act), and cross‑jurisdictional data‑use without waiting for exhaustive legislation.
  • Interpretation gap – practitioners need assistance translating the emerging DPDP Act into actionable compliance steps for AI model training.

Key Insight: The most immediate policy levers are utility guarantees and tax incentives; liability allocation can be handled through industry‑wide best‑practice contracts.


4. Global Multistakeholder Platforms – The IGF Role

Speaker: Chengetai Masango (UN‑IGF)

  • Capacity‑building – The IGF helps developing nations acquire the expertise to shape AI‑infrastructure norms that respect local contexts.
  • Bottom‑up process – Priorities are set by regional stakeholders; the U.S., India, and others pursue distinct agendas, preventing a one‑size‑fits‑all model.
  • Key outcomes from the most recent IGF (Lillestrøm, Norway):
    • Over 6,000 participants advocated for principles‑based governance rather than prescriptive models.
    • Emphasis on “fit‑for‑purpose” approaches that match digital maturity, data ecosystems, and institutional capacity.
  • Support mechanisms – Youth initiatives, regional training, and policy‑network reports amplify voices of the Global South.

Key Insight: The IGF does not legislate but aligns diverse stakeholders, reducing the risk of a fragmented global AI‑infrastructure landscape.


5. Structural Bottlenecks – Enterprise Perspective

Speaker: Ankit Bose (NASSCOM)

  • Misaligned KPIs – Business, technology, risk, and finance teams operate with divergent targets, causing friction in AI project approval.
  • Five concrete hurdles:
    1. ROI vs. budgeting mismatch – expectations of rapid returns clash with long‑term investment cycles.
    2. Domain understanding & guardrails – lack of shared comprehension of AI’s risk surface.
    3. Tool availability – enterprises lack mature, integrated AI‑governance tooling.
    4. Talent scarcity – shortage of professionals who can bridge business, risk, and technical dimensions.
    5. Customer‑centric coherence – difficulty aligning AI outputs with real‑world user needs.
  • Recommendation – Treat AI not as a compliance box‑ticking exercise but as a scalable, enterprise‑grade capability requiring unified governance.

Key Insight: Cross‑functional alignment and strategic budgeting are prerequisites for successful AI adoption at scale.


6. Integration Barriers & Embedding Resilience – Nayara Energy

Speaker: Rishikesh Kanegaonkar (Nayara Energy)

  • Structured methodology:
    1. Ideation – originates from business units, not IT.
    2. Up‑front ROI calculation – break down total cost of ownership (TCO) into: token usage, talent, infrastructure, operations, and ethical governance.
    3. Minimum Lovable Product (MLP) – replace “pilot” with an MLP that is production‑ready, with built‑in guardrails and data‑integration checks.
    4. Iterative loop – measure actual value versus projected ROI, refine the model, and repeat.
  • Governance layers – need clear “agent‑to‑agent” protocols (e.g., MCP, the Model Context Protocol) that delineate responsibilities among AI components.
  • Security & ethical oversight – must be woven into every phase, from data readiness to post‑deployment monitoring.
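The up‑front ROI step above reduces to arithmetic over the five TCO categories. A minimal sketch follows; the figures and function names are illustrative assumptions, not Nayara's actual numbers.

```python
def annual_tco(tokens, talent, infrastructure, operations, governance):
    """Total cost of ownership as the sum of the five categories above."""
    return tokens + talent + infrastructure + operations + governance

def roi(annual_value, tco):
    """Net value returned per unit of cost; compared against projections
    on each pass of the iterative loop."""
    return (annual_value - tco) / tco

# Illustrative annual figures (currency units arbitrary).
tco = annual_tco(tokens=120_000, talent=300_000, infrastructure=150_000,
                 operations=80_000, governance=50_000)
print(f"TCO={tco}, ROI={roi(annual_value=1_050_000, tco=tco):.2f}")  # -> TCO=700000, ROI=0.50
```

The value of writing the calculation down before any code is built is that each iteration of the loop can compare measured value against the same formula, exposing drift early.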

Key Insight: A disciplined, ROI‑first approach coupled with an MLP mindset prevents pilot fatigue and ensures resilience from day one.


7. Balancing Hyperscale & Edge – Hybrid Model

Speaker: Raju Vegesna (Sify) – second turn

  • AI as a journey, not a project – continuous evolution mirrors human intelligence.
  • Phase 1 (hyperscale) – limited to a handful of global giants (US & China) with massive GPU farms; India is still in the learning stage.
  • Phase 2 (use‑case definition) – Indian enterprises must identify high‑value applications and adopt a hybrid model (public hyperscalers + private cloud).
  • Three Indian imperatives:
    1. Scale – architecture must handle massive workloads.
    2. Cost‑effectiveness – not cheap, but financially sustainable.
    3. Long‑term sustainability – energy‑efficient, renewable‑powered infrastructure.
  • Geographic spread – beyond metros, tier‑2/3 cities need low‑latency edge nodes (≈30 ms to Mumbai) to support AI services.

Key Insight: A hybrid hyperscaler‑plus‑private‑cloud strategy aligns India’s cost constraints with the need for nationwide, low‑latency AI services.


8. Light‑Touch Regulation – Legal Perspective

Speaker: Shahana Chaterji

  • Light‑touch, technology‑agnostic baseline – rely on existing Indian statutes to address AI risks.
  • Sector‑specific augmentations – regulators (e.g., banking, telecom) should tailor additional safeguards (content labeling, transparency disclosures).
  • Self‑regulation & best practices – industry‑driven governance frameworks remain critical for rapid ROI and compliance.
  • Risk of over‑regulation – a heavy‑handed AI Act‑style law could stifle innovation; India’s measured approach aims to avoid that.

Key Insight: Combining a light‑touch horizontal framework with robust sector‑specific rules offers a balanced path to innovation and safety.


9. Evidence‑Based Policy Networks – IGF Follow‑up

Speaker: Chengetai Masango (second turn)

  • Policy Network on AI (PNAI) – synthesises evidence from diverse stakeholders to inform policy without imposing binding rules.
  • Voice of developing nations – ensures that data‑sovereignty concerns and local capacity‑building are reflected in global debates.
  • Paradox of scaling vs. sovereignty – the IGF mediates between corporate demand for global scale and national calls for data localisation.

Key Insight: Multistakeholder evidence generation reduces fragmentation while respecting differing national priorities.


10. Trust‑by‑Design – Audience Contribution

Speaker: Sahar (Audience, AI/ML practitioner)

  • Open‑source “Glassbox” – contrasts with opaque “Blackbox” models, aiming to embed trust at the design stage.
  • From PoC to production – trust gaps often appear when systems go live; suggests constitutional frameworks within AI pipelines (auditability, templates, continuous monitoring).
  • Recommendation – develop reusable trust‑by‑design patterns, embed compliance checks into CI/CD pipelines, and create audit tools that do not rely on exhaustive regulator oversight.
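One way to embed such compliance checks into a CI/CD pipeline is a pre‑deployment gate that fails the build when audit metadata is missing. The field names below are illustrative assumptions, not a standard schema.

```python
# Hypothetical audit fields a reviewer would need; illustrative only.
REQUIRED_FIELDS = {"model_name", "version", "training_data_ref",
                   "eval_metrics", "approved_by"}

def audit_gate(metadata):
    """Return the audit fields missing from a model's metadata;
    a CI step would fail the pipeline if this list is non-empty."""
    return sorted(REQUIRED_FIELDS - metadata.keys())

meta = {"model_name": "credit-scorer", "version": "1.2.0",
        "training_data_ref": "dataset-v3", "eval_metrics": {"auc": 0.91}}
missing = audit_gate(meta)
print("blocked:" if missing else "cleared:", missing)  # -> blocked: ['approved_by']
```

Because the gate runs on every build, auditability stops depending on exhaustive regulator oversight and becomes a property of the pipeline itself.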

Key Insight: Embedding transparency mechanisms directly in the software development lifecycle is essential for scaling responsible AI.


11. Desired AI Department – Panel Round‑up

Prompt: “What characteristics should an AI department have?”

  • Ankit Bose: Cross‑functional team with tech, business, and compliance expertise; code‑level compliance (e.g., “compliance as code”).
  • Raju Vegesna: Multimodal department encompassing budget, talent acquisition, and governance; needs to align pace of technology with business value.
  • Shahana Chaterji: Empowered, possibly board‑level, with authority over procurement, deployment, monitoring, and audit.
  • Chengetai Masango: Strong capacity‑building function; should partner with external multistakeholder networks to stay updated on global best practices.
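The “compliance as code” idea the panelists converge on can be read as policies expressed as executable rules evaluated against a deployment configuration. A minimal sketch, with rule names and config keys assumed for illustration:

```python
# Policies as executable rules over a deployment config.
# Rule names and config keys are illustrative assumptions.
POLICIES = [
    ("data_residency", lambda cfg: cfg.get("region", "").startswith("in-")),
    ("encryption_at_rest", lambda cfg: cfg.get("encrypted", False)),
    ("human_review", lambda cfg: cfg.get("review_stage") is not None),
]

def violations(cfg):
    """Names of the policies a deployment config fails."""
    return [name for name, rule in POLICIES if not rule(cfg)]

cfg = {"region": "in-west", "encrypted": True, "review_stage": None}
print(violations(cfg))  # -> ['human_review']
```

An empowered AI department would own the policy list itself, so procurement, deployment, and audit all check against a single machine‑readable source of truth.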

Key Insight: The AI department must be empowered, cross‑disciplinary, and equipped with automated compliance tooling to drive responsible adoption.


12. Power & Digital Infrastructure – Energy Perspective

Speaker: Rishikesh Kanegaonkar (second turn)

  • Digital over physical – India’s next phase of growth hinges on digital infrastructure and the power systems behind it (grid, renewables) rather than traditional roads and metros.
  • Renewables as backbone – Solar and wind already provide ample capacity; nuclear still untapped.
  • Investment imperative – Parallel investment in power grids and AI data‑centres is essential for nationwide AI enablement.

Key Insight: Reliable, renewable‑powered electricity is a non‑negotiable foundation for a trusted AI ecosystem.


13. Ownership & ROI of AI Projects – Business View

Speaker: Ankit Bose (final turn)

  • Value creation resides with business units – AI projects are not IT initiatives; they are process‑oriented and must deliver measurable ROI.
  • Governance mandate – Top‑level management must institutionalise AI governance to ensure alignment, monitoring, and sustained value.

Key Insight: Clear ownership (business side) and governance (executive oversight) are required to translate AI investments into real outcomes.


14. Data Sovereignty & Global Cooperation

Speaker: Chengetai Masango (final turn)

  • Balancing act – Nations must protect data sovereignty while enabling global AI scale.
  • Policy tools – Favorable public‑policy instruments, incentives for local data‑centre development, and technology transfer facilitate this balance.
  • Reference – The WSIS+20 (World Summit on the Information Society) review resolution emphasised participation of developing countries in digital governance.

Key Insight: Thoughtful policy can reconcile national data‑sovereignty concerns with the need for cross‑border AI collaboration.


15. Report Launch & Closing

  • Announcement – The panel’s joint research report (prepared by Sify, NASSCOM, and other partners) was unveiled. Copies are available at the Sify booth.
  • Group photograph – Organisers invited attendees for a photo.
  • Final thank‑you – Moderator thanked panelists and audience, signalling the end of the session.

Announcements

  • Research Report Release – Joint panel (Sify, NASSCOM, etc.): physical copies at the Sify booth; digital version to be shared later.
  • Capacity‑Building Initiatives – IGF / PNAI: ongoing training, youth programs, and regional workshops to support AI governance.
  • Call for Cross‑Sector Governance – All panelists: emphasis on establishing empowered AI departments and embedding compliance as code.

Key Takeaways

  • Resilience is foundational – Data‑centres must be always‑on, powered by renewable energy, and monitored via AI‑driven predictive tools.
  • Regulatory focus should be on utilities and tax incentives – Heavy‑handed AI legislation risks stifling growth; light‑touch, sector‑specific rules are preferred.
  • Hybrid cloud‑edge architecture – Combining hyperscalers with private cloud and edge nodes enables cost‑effective, low‑latency AI services across India’s diverse geography.
  • Cross‑functional AI departments – Successful AI adoption requires empowered teams that blend tech, business, risk, and compliance, with “compliance as code” baked into pipelines.
  • Evidence‑based multistakeholder platforms (IGF) help align global AI‑infrastructure norms while preserving national data‑sovereignty concerns.
  • Trust‑by‑design must be engineered – Open‑source transparency tools and constitutional frameworks inside AI systems are essential for moving from PoC to production.
  • ROI‑first methodology – Define total cost of ownership (tokens, talent, infrastructure, governance) before code is written; iterate with a Minimum Lovable Product rather than a fragile pilot.
  • Talent scarcity and KPI misalignment are the top organisational barriers; addressing them through unified metrics and upskilling is critical.
  • Power infrastructure is a strategic enabler – Renewable‑based electricity supply underpins all AI‑infrastructure ambitions.
  • Ownership of AI value rests with business units – IT acts as an enabler; governance must be led from the top to ensure sustained impact.

End of Summary