Beyond Digital Empires: AI Implementation for Middle Powers

Detailed Summary

1. Opening and Framing (Akash Kapur)

  • The moderator introduced Akash Kapur and handed the floor to him.

  • Akash explained that the session title combines two interrelated strands:

    1. “Beyond Digital Empires” – the macro‑geopolitical layer (sovereignty, agency, the risk of digital colonialism).
    2. “AI Implementation” – the micro, on‑the‑ground practice of building models, data pipelines, and compute infrastructure in middle‑income nations.
  • He set the agenda: start with the big picture, then drill down to concrete implementation.

2. India’s Perspective on AI Sovereignty (Shashi Vempati)

  • Geopolitical drivers – COVID‑19 vaccine shortages, the Russia‑Ukraine war, and abrupt platform policy changes illustrated why India cannot rely on foreign platforms for messaging or essential services.
  • Supply‑chain risk – semiconductor shortages highlighted the need to “de‑risk” the entire stack, from chips to cloud platforms.
  • Cultural‑linguistic risk – major foundation models trained on predominantly English‑centric data neglect India’s 22 official languages and >100 dialects; many dialects have no digital footprint at all.
  • Conclusion – building home‑grown models is essential to preserve both strategic sovereignty and cultural relevance.

3. Global Landscape of Sovereign AI Projects (Pablo Chávez)

  • Introduced a Sovereign AI Index (Center for a New American Security).

    • 2024: ~30 countries were running ~40 sovereign‑AI projects.
    • 2025/early 2026: the number rose to ~130 projects across >50 countries.
  • India’s position – listed as a “middle power” and one of four nations (India, South Korea, Taiwan, Sweden) that are active across all three layers tracked by the index:

    1. Model building
    2. Infrastructure development
    3. Data initiatives
  • Implication: the sovereign‑AI trend is real, data‑backed, and accelerating.

4. Trade, Innovation, and Security Concerns (Natalie Black)

  • No single definition of AI sovereignty; the meaning shifts depending on stakeholder (government, industry, consumer).

  • Three analytical lenses:

    1. Innovation – Will sovereignty foster or hinder R&D and market entry for domestic firms?
    2. User‑level impacts – Access, privacy, and the ability of citizens and businesses to use AI services safely.
    3. Security – Cyber‑risk, data protection, and the broader question of how much of the AI stack a nation can actually control.
  • Emphasised that sovereignty is a spectrum, not a binary switch; middle powers must decide where on that spectrum they want to sit.

5. Frugal Infrastructure & Open‑Source Approaches (Kate Kallot)

  • Frugal infrastructure – providing “choice” over where compute, data, and models are hosted, rather than assuming every country can build massive gigawatt‑scale data centres.

  • Key observations

    • 2.2 billion people worldwide are unconnected; their languages and knowledge are largely missing from major LLMs.
    • Minimum viable compute – Identify the smallest, most cost‑effective hardware needed for a country’s priority use‑cases (e.g., agricultural advice, educational tutoring).
  • Stack‑by‑stack approach

    1. Data – Digitize analog records, curate culturally‑relevant datasets, and keep critical data resident within national borders.
    2. Models – Use open‑source LLMs as a base, fine‑tune them on local data, or develop small, highly‑specialised models for sector‑specific needs (education, climate‑resilience, etc.).
    3. Infrastructure – Deploy micro‑data‑centres powered by renewable energy; enable edge‑computing solutions that bring AI close to the user.
  • Risk of “digital colonialism” – Africa controls <1 % of global compute despite housing 18 % of humanity; without local infrastructure, AI value continues to flow outward.
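The "minimum viable compute" idea above can be made concrete with a back-of-the-envelope sizing calculation. The sketch below is illustrative only (it was not presented on the panel): it uses the standard rules of thumb of roughly 2 bytes per parameter at FP16, halving with each quantisation step, plus an assumed ~20 % overhead for activations and KV cache, to check whether an open-source model fits a given device's memory budget.

```python
# Back-of-the-envelope "minimum viable compute" sizing for model inference.
# Rule of thumb: memory ≈ parameter count × bytes per parameter, plus an
# assumed ~20% overhead for activations and KV cache (workload-dependent).

BYTES_PER_PARAM = {
    "fp16": 2.0,   # half-precision weights
    "int8": 1.0,   # 8-bit quantisation
    "int4": 0.5,   # 4-bit quantisation (GGUF/GPTQ-style formats)
}

def inference_memory_gb(params_billion: float, precision: str,
                        overhead: float = 0.2) -> float:
    """Estimated RAM/VRAM (in GB) needed to serve a model of the given size."""
    weight_bytes = params_billion * 1e9 * BYTES_PER_PARAM[precision]
    return weight_bytes * (1 + overhead) / 1e9

def fits(params_billion: float, precision: str, device_gb: float) -> bool:
    """Does the model fit on a device with the given memory budget?"""
    return inference_memory_gb(params_billion, precision) <= device_gb

# Example: can a 7B-parameter open-source model run on a 16 GB edge device?
for prec in ("fp16", "int8", "int4"):
    gb = inference_memory_gb(7, prec)
    print(f"7B @ {prec}: ~{gb:.1f} GB -> fits 16 GB device: {fits(7, prec, 16)}")
```

Under these assumptions a 7B model needs roughly 17 GB at FP16 but only about 4 GB at 4-bit, which is why quantised open-source models are the practical entry point for edge and micro-data-centre deployments.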

6. Degrees of Sovereignty & Partnerships

  • Panel consensus – Full, isolated sovereignty is unrealistic. Most public‑sector AI projects already involve partnerships with foreign technology firms.

  • Strategic approach – Identify critical layers (e.g., data sovereignty, model control) and negotiate targeted partnerships that minimise strategic risk while still leveraging external expertise.

7. Open‑Source Models: Opportunity & Tension

  • Pablo: ~40 % of tracked sovereign projects are fine‑tuning open‑source models (e.g., LLaMA, newer Chinese models).

  • Kate: Open‑source is the only viable avenue for many Global‑South developers because:

    1. Accessibility – No licensing fees; the code can be run on modest hardware.
    2. Performance – Recent Chinese LLMs (e.g., Kun, Qwen) have surpassed older open‑source models on benchmark tests, making them attractive choices.
  • Edge‑AI & Direct‑to‑Mobile (D2M) – The emerging ATSC 3.0 broadcast standard and an Indian‑designed chip (Saankhya Labs) allow secure distribution of models to phones, mitigating the risk of malicious model injection.

8. Risk – From Existential to Societal

  • Risk spectrum discussed:

    • Existential/global (AI takeover) – considered over‑hyped for immediate policy.
    • Cyber‑security – data breaches, model poisoning, cross‑border legal exposure (e.g., the U.S. CLOUD Act).
    • Socio‑economic divide – Urban elite vs. rural masses in India; unequal AI literacy leading to a “digital‑AI divide.”
  • Pablo highlighted that public‑sector sovereign AI projects often still depend on foreign compute providers, reinforcing structural dependencies.

  • Natalie stressed that regulators should focus on outcome‑based goals (fairness, transparency, security) rather than technology‑specific rules, using sandboxes and innovation licences to stay agile.

9. Audience Q&A – Main Themes

  • Consumer choice vs. sovereignty – Will users default to superior Western models, making sovereign models irrelevant?

    • Pablo: Governments currently purchase Western models for performance but are slowly demanding local investment and considering regional alternatives.
    • Core insight: Sovereignty is strategic risk mitigation rather than a zero‑sum competition.
  • Collective bargaining for the Global South – Can a bloc negotiate better terms with hyperscalers?

    • Pablo: Examples include the EU, India's AI Mission, Canada, and ECOWAS. Collaborative procurement (e.g., "hub‑and‑spoke" compute pools) is emerging but still early.
    • Core insight: Regional cooperation can improve bargaining power, yet outcomes remain uncertain.
  • Strategic partnership vs. structural dependency – How can nations tell them apart?

    • Panel (general): Most nations currently depend on external infrastructure; the goal is to reduce rather than eliminate that dependency over time.
    • Core insight: Recognise dependency, manage it, and incrementally build domestic resilience.
  • Why do Chinese LLMs dominate the Global South?

    • Kate: Open‑source availability plus strong benchmark performance. Many African developers use ARM‑based edge devices that align with Chinese model ecosystems.
    • Core insight: Open source + performance = adoption.
  • Individual vs. national sovereignty – If individuals control their own AI exposure, does that solve national concerns?

    • Panel (general): Individual agency matters but does not fully substitute for national policy; collective security and public‑service use‑cases still require state‑level control.
    • Core insight: Both levels of sovereignty are needed; they are complementary, not interchangeable.
  • Regulatory harmonisation across jurisdictions – Can competition law be unified?

    • Natalie: Full harmonisation is unlikely; focus on outcome‑based regulation (fairness, transparency) that can be implemented locally.
    • Core insight: A pragmatic approach beats universal legal convergence.
  • Feasibility of shared compute infrastructure for middle powers

    • Pablo & Kate: Highlighted existing models (the EU compute network, India's fractional‑time hyperscaler programme, the ECOWAS hub‑and‑spoke design). Still "early days" – more evidence is expected within 12–18 months.
    • Core insight: Collaborative compute pools are promising but not yet proven.
  • Balancing open‑source openness with strategic risk

    • Pablo: Even sovereign projects often release their models openly; openness can level the playing field but also creates strategic exposure.
    • Core insight: Open source is a double‑edged sword – it democratises access but may dilute control.

10. Closing Remarks

  • The moderator thanked the panelists and the audience, announced the end of the session, and transitioned to the next program slot. No formal announcements (product launches, partnerships) were recorded.

Key Takeaways

  • Sovereign‑AI is a growing, data‑backed trend – from 40 projects in 2024 to >130 projects across 50+ countries by early 2026 (per the Sovereign AI Index).
  • India exemplifies a “full‑stack” sovereign AI effort, active in model building, infrastructure, and data initiatives; its experience is a template for other middle powers.
  • Cultural and linguistic inclusion is a core driver of AI sovereignty; dominant LLMs often ignore the linguistic diversity of middle‑power societies.
  • Open‑source models are the primary vehicle for capacity building in the Global South, driven by accessibility and competitive performance (especially Chinese LLMs).
  • Full technological independence is unrealistic; a pragmatic strategy is to identify critical layers, secure them through targeted partnerships, and gradually reduce structural dependencies.
  • Frugal, micro‑data‑centre and edge‑computing solutions can provide affordable, locally‑controlled compute for a wide range of public‑service applications.
  • Regulation should be outcome‑focused (fairness, transparency, security) rather than prescriptive, using sandboxes and innovation licences to stay agile.
  • Risk management must be multi‑dimensional: beyond existential concerns, focus on cyber‑security, data residency, and the socio‑economic AI divide within societies.
  • Regional collaboration (e.g., EU, ECOWAS, India’s AI Mission) offers a pathway to collective bargaining power and shared compute infrastructure, though measurable success will likely materialise only after 12‑18 months.
  • Individual AI sovereignty complements but does not replace national sovereignty; policy must address both personal data agency and collective security needs.

Prepared from the verbatim transcript of the panel “Beyond Digital Empires: AI Implementation for Middle Powers” recorded at the AI conference in Delhi.