Sovereign AI for India: Designing the Nation’s Future Compute, Data and Innovation Ecosystem
Abstract
The panel examined how India can achieve sovereign AI—a self‑reliant ecosystem of compute, data, models and governance that serves national priorities and constitutional values while reducing reliance on foreign AI stacks. Panelists highlighted the critical shortage of GPU‑based compute, the need for a nation‑wide data‑infrastructure stack, the importance of interoperability, collaborative research consortia, and skill‑building for a new generation of AI engineers. Government policies, shared‑compute frameworks, and private‑sector initiatives were discussed as levers to accelerate adoption and to position India as a leader for the Global South.
Detailed Summary
1. Opening & Report Launch
- Ankit Bose opened the session, announced the launch of the Sovereign AI Research Report (Amrita Vishwa Vidyapeetham) and invited Dr Manisha V. Ramesh (Pro-Vice-Chancellor) and Dr Shiva Ramakrishnan (Head, AI-Safety Lab) to the stage for the formal release.
- After a brief pause (Prof. Ganesh was delayed by traffic), the moderator introduced the panel: Prof. Ganesh Ramakrishnan, Sunil Gupta, Bhaskar Gorti, Kalyan Kumar, and Breno Melo.
2. First Core Question – “What is the single most important thing India must do to build sovereign AI capability?”
2.1 Sunil Gupta (Yotta) – Compute as the Bottleneck
- Key Insight: India’s major deficit is high‑performance GPU compute.
- Early AI breakthroughs (e.g., ChatGPT) showed that specialized GPU clusters are essential; ordinary CPUs cannot support modern model training or inference at scale.
- Yotta has accumulated ≈10,000 GPUs (mostly NVIDIA) that have trained most of the "sovereign models" showcased at the summit (e.g., Sarvam, Bharat Gen, Soket AI).
- Recommendation: Treat compute as a basic utility—abundant, low‑cost, and universally accessible to startups, academia, and government labs.
- Government Role: The Indian AI Mission has begun subsidising GPU purchases, redistributing chips to research institutes, and encouraging private players to pool resources.
- Future Outlook: Scaling from thousands to millions of GPUs is required to support a "UPI-scale" rollout across 50 sectors, each demanding massive inference capacity.
2.2 Kalyan Kumar (HCL Software) – The Data Stack & Edge Infrastructure
- Key Insight: Beyond compute, India must build a robust data platform layer that supports vector search, edge inference, and data‑centric AI pipelines.
- HCL has acquired key database patents (Actian’s Ingres, CWI’s Vector engine) and is preparing to launch EdgeDB and a localized Vector AI engine (April release).
- Argument: While many focus on hardware, data quality, provenance, and cataloguing determine whether AI solutions can scale. A “data‑centric approach”—including data products, contracts, and catalogs—will be the differentiator.
- Edge‑Centric Vision: As AI moves to the edge (mobile, feature phones), low‑latency inference engines and distributed vector databases become essential.
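To make the "vector search at the edge" idea concrete, here is a minimal, self-contained sketch of a brute-force cosine-similarity index in Python. It only illustrates the concept the panel described; the class name `TinyVectorIndex`, the random stand-in embeddings, and the example payloads are invented for this write-up and are unrelated to HCL's Vector engine or any product named above.

```python
# Minimal sketch of the kind of lookup a vector-search layer provides.
# Illustrative only: not any vendor's engine; embeddings are random stand-ins.
import numpy as np

class TinyVectorIndex:
    """Brute-force cosine-similarity index, small enough for an edge device."""

    def __init__(self, dim: int):
        self.dim = dim
        self.vectors = np.empty((0, dim), dtype=np.float32)
        self.payloads: list[str] = []

    def add(self, vector: np.ndarray, payload: str) -> None:
        v = vector.astype(np.float32).reshape(1, self.dim)
        v /= np.linalg.norm(v) + 1e-12           # store unit vectors
        self.vectors = np.vstack([self.vectors, v])
        self.payloads.append(payload)

    def search(self, query: np.ndarray, k: int = 3) -> list[tuple[str, float]]:
        q = query.astype(np.float32)
        q /= np.linalg.norm(q) + 1e-12
        scores = self.vectors @ q                 # cosine similarity
        top = np.argsort(scores)[::-1][:k]
        return [(self.payloads[i], float(scores[i])) for i in top]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    index = TinyVectorIndex(dim=8)
    for doc in ["crop advisory (Hindi)", "loan FAQ (Tamil)", "health scheme (Marathi)"]:
        index.add(rng.normal(size=8), doc)        # stand-in embeddings
    print(index.search(rng.normal(size=8), k=2))
```

A production edge engine would replace the brute-force scan with an approximate-nearest-neighbour structure, but the interface (add vectors, query by similarity) is the piece the data-platform layer exposes.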
2.3 Breno Melo (GenSpark.ai) – Adoption, ROI, and Organizational Alignment
- Key Insight: Most AI pilots never make it to production (≈95 % failure rate). The root causes are non‑technical.
- Three adoption barriers:
  - ROI invisibility – CFOs cannot quantify AI's return; only 1 in 10 have tools to measure it.
  - Data-trust-compliance friction – Silos and departmental red-tape stall projects for months.
  - Champion problem – Lack of executive sponsorship prevents decisions and budgets.
- Recommendation: Build cross‑functional “champion” teams with a single point of view and executive endorsement. This accelerates budget approval, reduces friction, and drives faster time‑to‑value.
2.4 Prof. Ganesh Ramakrishnan (Bharat Gen) – Interoperability & Participation
- Key Insight: Interoperability at every stack layer is essential for democratic participation, scalability, and avoiding a “single‑vendor lock‑in.”
- Interoperability enables alternative implementations (e.g., different model families, language‑specific experts) and human‑in‑the‑loop decision making.
- Data-product emphasis: Cataloguing data, creating data contracts, and monetising data responsibly (echoing the Prime Minister's "data-owner rights" mantra); a minimal sketch of such a contract appears after this list.
- Scale-out vs. Scale-up: While scaling up (larger models) is necessary, scale-out (making AI accessible to billions) requires interoperable standards and a wide consortium (e.g., the nine-institution Bharat Gen consortium, which includes IIM Indore).
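As an illustration of what a machine-readable data contract and catalog entry could look like, the sketch below uses a plain Python dataclass. The field names (`owner`, `license`, `provenance`, etc.) and the example values are assumptions made for this summary, not a standard proposed by the panel or by any Indian data-governance framework.

```python
# Illustrative "data contract": catalogued data with provenance and owner rights.
# Field names and values are assumptions for this sketch, not a published standard.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DataContract:
    dataset_id: str
    owner: str                       # the data owner who retains rights
    license: str                     # terms under which the data may be used
    languages: list[str]             # e.g., Indian languages covered
    provenance: str                  # where and how the data was collected
    schema: dict[str, str] = field(default_factory=dict)  # column -> type

    def to_catalog_entry(self) -> str:
        """Serialise the contract so a shared catalog can index it."""
        return json.dumps(asdict(self), ensure_ascii=False, indent=2)

contract = DataContract(
    dataset_id="agri-advisory-v1",
    owner="State Agriculture Department",
    license="research-and-public-good-use",
    languages=["hi", "mr", "te"],
    provenance="Field-officer transcripts, consent recorded, 2024-25",
    schema={"query_text": "string", "advisory_text": "string", "district": "string"},
)
print(contract.to_catalog_entry())
```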
3. Follow‑up Question – “What is India’s moat against massive global models (trillions of parameters)?”
Prof. Ganesh Ramakrishnan
- Collaboration as Moat: Emphasised a collaborative ecosystem—government, academia, and industry co‑creating multilingual, domain‑aware models.
- Cited the Bharat Gen consortium (nine academic institutions) that builds foundation models tailored to India's 22 official languages, using a mixture-of-experts architecture that shares expertise across related languages (e.g., a Hindi-Marathi expert, with cross-language sharing for Telugu); a minimal routing sketch appears after this list.
- Human‑centric Design: Empathy between linguists and computer scientists ensures models handle Indian phonetics and code‑mixing patterns, giving a cultural advantage.
- Strategic Goal: Reach ≈1 trillion parameters (the “North Star”), but even a 20–100 billion‑parameter model can serve >95 % of Indian use cases.
- Open‑source & Open‑collaboration Model: Shared research, open licensing, and an MOU with global partners (e.g., US‑based Heritage Foundation) provide access to worldwide expertise while retaining sovereignty.
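The sketch below illustrates the general mixture-of-experts routing idea referenced above: a learned gate sends each token to a small subset of experts, so related languages can end up sharing an expert. It is a generic, minimal example written for this summary (assuming PyTorch); the expert count, layer sizes, and routing scheme are illustrative and do not describe Bharat Gen's actual architecture.

```python
# Minimal mixture-of-experts routing sketch (illustrative, not Bharat Gen's design).
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    def __init__(self, d_model: int = 64, n_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                           nn.Linear(4 * d_model, d_model)) for _ in range(n_experts)]
        )
        self.gate = nn.Linear(d_model, n_experts)    # learns which experts to use
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Route each token to its top-k experts.
        weights = torch.softmax(self.gate(x), dim=-1)     # (tokens, n_experts)
        topw, topi = weights.topk(self.top_k, dim=-1)     # keep the best experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = topi[:, slot] == e                  # tokens routed to expert e
                if mask.any():
                    out[mask] += topw[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(10, 64)      # e.g., embeddings of Hindi/Marathi/Telugu tokens
print(TinyMoE()(tokens).shape)    # torch.Size([10, 64])
```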
4. Government‑Led Shared Compute Framework
- Sunil Gupta outlined three historic governance models:
  - Fully government-owned (UPI-style) – unrealistic for compute scale.
  - Pure private-sector – market-driven but risk-averse.
  - Hybrid "empanelled" model – adopted by the AI Mission.
- Empanelment Process: Private providers (Yotta, Tata Communications, Sarvam, Bharat Gen, etc.) declare GPU capacity and receive government-guaranteed purchases at agreed prices, ensuring financial viability.
- Result: ≈38,000 GPUs currently pooled; 20,000 more were announced by the Prime Minister.
- Future Need: Expand from training-only support (the focus of the first funding cycle) to inference-as-a-service, enabling real-world deployments (agriculture, health, education).
5. Talent & Skill Development
5.1 Ankit Bose (NASSCOM) – From Service to IP Builder
- NASSCOM’s historic strength lies in global engineering services; the pivot is toward building proprietary AI IP.
- Since 2015‑16, NASSCOM has acquired IP assets and shifted focus to research‑driven products.
- Highlights the need for “smarter people”: engineers with systems thinking, research orientation, and deep domain expertise (e.g., physics for quantum‑level compute).
5.2 Kalyan Kumar (HCL Software) – Upskilling the Developer Community
- HCL is targeting 150,000 developers across India within six months, providing AI-ready training and new SDLC frameworks.
- Curriculum overhaul: Adding specialisation tracks (e.g., data‑engineering, AI ethics) to B.Tech/M.Tech programmes to produce domain experts rather than generalists.
5.3 Bhaskar Gorti (Tata Communications) – Brief Mention
- Reinforced the necessity of industry‑government collaboration for scaling infrastructure and talent pipelines.
6. Adoption‑Accelerating Strategies (Re‑visited)
- Breno Melo reiterated three levers for mass adoption:
  - Solve a real-world problem that visibly improves daily life (e.g., automating repetitive office tasks).
  - Consolidate tooling to reduce context-switching; GenSpark's platform aggregates LLM capabilities into a single UI.
  - Handle multilingual diversity and ensure data security, as users remain skeptical about data stewardship.
7. Rapid‑Fire 30‑Second Reflections
| Speaker | Core Message (≈30 s) |
|---|---|
| Breno Melo | The biggest unknown is human‑AI interaction: we must define safe, intuitive interfaces as AI’s velocity outpaces our regulatory understanding. |
| Prof. Ganesh Ramakrishnan | Without alignment and provenance (metadata, data contracts, model transparency), AI becomes a “product” rather than a tool for humans. Education on provenance is crucial. |
| Sunil Gupta | AI must serve mass benefit, not be a “toy”. The summit’s purpose is to translate AI into tangible societal impact. |
| Kalyan Kumar | AI’s impact spans consumer, enterprise, government, and critical infrastructure. Choice of platform and human‑in‑the‑loop design are essential to preserve sovereignty. |
| Ankit Bose | NASSCOM is drafting a policy document and roadmap for sovereign AI/AGI and quantum capabilities; a QR code (displayed) invites stakeholder feedback. |
8. Closing Remarks & Future Roadmap
- Ankit Bose emphasized NASSCOM’s long‑term commitment to sovereign AI: policy drafting, industry consortia, and a public repository (QR‑code) for community contributions.
- He highlighted ongoing research into GPU‑specific model‑serving engines and the need to re‑engineer the entire stack for sovereign use.
- The panel agreed on the necessity of continuous collaboration, open‑source sharing, and government‑industry partnership to keep the momentum.
9. Announcements & Administrative Items
| Announcement | Details |
|---|---|
| Sovereign AI Research Report | Launched by Amrita Vishwa Vidyapeetham; QR code displayed for download. |
| MOU Signing | Amrita Vishwa Vidyapeetham & NASSCOM signed a memorandum of understanding on collaborative research. |
| India Chips Limited JV | HCL announced a joint venture with Foxconn to build a 16/32 nm fab (OSAT unit), slated to be operational in ~5 years; this signals a long-term sovereign hardware capability. |
| Group Photo & Feedback | Panelists stayed for a group photograph; attendees invited to provide feedback via the QR code. |
Key Takeaways
- Compute is the cornerstone – sovereign AI cannot scale without a national, affordable, GPU‑rich compute fabric; shared‑compute empanelment is already delivering tens of thousands of GPUs.
- Data infrastructure must be built in parallel – vector databases, edge inference engines, and robust data‑cataloguing are essential to unlock AI’s full potential.
- Interoperability fuels participation – open standards, modular architectures, and data contracts allow diverse actors (start‑ups, academia, government) to co‑create and scale AI solutions.
- Adoption hinges on business value – clear ROI, executive sponsorship, and cross‑functional champion teams are required to move pilots into production.
- Collaboration is India’s moat – a consortium‑driven, multilingual, domain‑aware model ecosystem gives India a cultural and technical edge over monolithic global LLMs.
- Government’s hybrid model works – the “empanelled” approach balances private‑sector agility with public‑sector guarantees, ensuring both training and inference resources are funded.
- Skill uplift is non-negotiable – up to 150,000 developers will be reskilled; curricula are being revamped to produce specialist AI engineers and systems thinkers.
- Human‑centred design and provenance are mandatory to prevent AI from becoming a black‑box product that displaces rather than augments society.
- Future roadmap includes AGI, quantum compute, and domestic GPU design – NASSCOM is drafting policy and fostering partnerships (e.g., India Chips Ltd) to secure the end‑to‑end stack.
- Immediate call‑to‑action: stakeholders are urged to review the QR‑coded research report, contribute to the open‑source repository, and engage with the upcoming policy drafts.
See Also:
- ai-for-everyone-empowering-people-businesses-and-society
- ai-innovators-exchange-accelerating-innovation-through-startup-and-industry-synergy
- flipping-the-script-how-the-global-majority-can-recode-the-ai-economy
- artificial-general-intelligence-a-new-paradigm-of-safety-security-privacy-ethics-and-governance
- keynote-i-to-the-power-of-ai-an-8-year-old-on-aspiring-india-impacting-the-world
- democratizing-ai-resources-in-india
- scaling-trusted-ai-global-practices-local-impact