Pathways for Equitable AI Compute Access

Abstract

The panel explored the political economy of AI compute, arguing that while AI models are globally applicable, the underlying compute resources remain highly concentrated. Participants examined the “shared compute hub” concept as a policy‑driven, jointly‑financed, and transparently governed infrastructure that could democratise access for the Global South. The discussion wove together perspectives on open‑source precedents, digital sovereignty, and industry‑government partnerships (with a focus on India’s rapid data‑centre expansion), as well as corporate responsible‑AI practices, ending with a call for concrete governance frameworks and coordinated international action.

Detailed Summary

1. Opening and Framing (Rohit Kumar & Sangeeta Gupta)

  • Moderator’s introduction – Rohit Kumar welcomed the audience, outlined the session’s focus on “shared compute hubs”, and stressed that the conversation would move beyond the “glamour” of AI applications to the structural issue of compute inequality.
  • Impact framing – Sangeeta Gupta asked the rhetorical question, “Impact for whom and powered by whose compute?” She highlighted that while AI’s benefits are borderless, compute resources are geographically concentrated, shaping who can innovate, govern, and reap societal impact.

2. Why Compute Is a Critical Bottleneck (Linda Griffin)

  • Bottleneck – AI progress is constrained by limited access to high‑performance compute.
  • Winner‑takes‑all dynamics – Concentration of compute means fewer choices for research, less accountability, and limited pathways for newcomers.
  • Historical analogy – In the early web era, open‑source projects (Linux, Apache, Mozilla) broke infrastructure monopolies, enabling affordable, widely‑available tools.
  • Market failure – Without broad compute access, competition collapses and innovation stalls; market forces alone are insufficient to correct this.
  • Call to action – Policy interventions and technical innovation (e.g., shared compute hubs) are needed to address the imbalance.

3. India’s Experience: Building Compute at Scale (Sunil Gupta)

  • Rapid growth – In the last two years India has moved from virtually no AI‑grade compute to a burgeoning ecosystem of GPU‑dense data centres.
  • Hybrid financing model – The Indian government has acted as a marketplace: it does not build data centres itself but funds GPU purchases and then contracts private firms to provide compute on a consumption basis.
  • Policy incentives – States compete for data‑centre investment by offering free land, stamp‑duty waivers, reduced power tariffs, and other fiscal perks.
  • Scale & ambition – Current capacity is ~1.4 GW, projected to reach 6–10 GW within a few years; private operators such as Yotta, NVIDIA, and international funds (e.g., Blackstone) are already committing capital.
  • Challenges noted
    • Ensuring reliable power and cooling for GPU‑heavy hubs.
    • Coordinating land‑acquisition and regulatory approvals across federal and state levels.
    • Balancing “Jugaad” frugal engineering with the need for high‑quality, resilient infrastructure.

4. The Shared Compute Hub Concept (Robert F. Trager)

  • Concept definition – A jointly‑financed, regionally‑shared compute facility governed by an independent board, offering proportional access based on investment shares.
  • Digital sovereignty angle – Many countries (outside the US and China) lack bargaining power to secure favourable pricing or guaranteed access to frontier models. Shared hubs could provide a public‑good layer of compute that underpins digital sovereignty.
  • Governance considerations – The hub must be transparent and politically neutral to avoid domination by any single state or corporation.
  • Geopolitical parallels – Reference to Canada’s push (at Davos) for “middle‑power” collaboration on strategic AI infrastructure.
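The proportional‑access idea above can be sketched numerically. The following is an illustrative sketch only, not anything presented by the panel: it assumes a hypothetical hub whose members co‑finance a pool of GPU‑hours and receive access pro rata to their investment shares.

```python
# Illustrative sketch of pro-rata access in a shared compute hub.
# All member names and figures are hypothetical, not from the panel.

def allocate_gpu_hours(investments: dict[str, float],
                       total_gpu_hours: float) -> dict[str, float]:
    """Split the hub's GPU-hour pool proportionally to each member's investment."""
    total_invested = sum(investments.values())
    return {
        member: total_gpu_hours * amount / total_invested
        for member, amount in investments.items()
    }

# Hypothetical example: three members jointly finance a 1,000,000 GPU-hour pool.
shares = allocate_gpu_hours(
    {"Member A": 50.0, "Member B": 30.0, "Member C": 20.0},  # capital contributed
    total_gpu_hours=1_000_000,
)
print(shares)  # Member A: 500000.0, Member B: 300000.0, Member C: 200000.0
```

A real hub would layer governance rules on top of this arithmetic (reserved public‑interest quotas, usage caps, independent auditing), but proportional allocation is the baseline the panel's "investment shares" framing implies.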

5. Microsoft’s Responsible‑AI Perspective (Hector De Rivoire)

  • Diffusion statistics – 1 in 6 people globally use generative‑AI tools; usage is 24.7 % in the Global North vs. 14.1 % in the Global South, a widening gap.
  • The “Three T’s” framework:
    1. Technology – Availability of compute, connectivity, and power.
    2. Trust – Robust security, privacy, fairness, and transparent governance.
    3. Talent – Skilled workforce capable of deploying AI responsibly.
  • Microsoft’s actions – Ongoing investment in data‑centre capacity in the Global South, support for local AI ecosystems, and a Responsible AI office (since 2019) guided by principles of privacy, inclusion, transparency, and fairness.
  • Rule‑of‑law & competition – Emphasised the need for clear regulatory standards and “fair competition” on Azure, where a plurality of models (OpenAI, Cohere, DeepSeek, etc.) coexist.
  • Digital sovereignty debate – Argued that no single state or company should dominate every layer of the AI stack; instead, influence should be asserted where a nation has strategic leverage (e.g., data curation, specialized models, regulatory standards).

6. Re‑framing AI Sovereignty (Pei‑Chin Tay)

  • Strategic‑agency view – Sovereignty should be recast as strategic agency: governments make deliberate choices about governance, assessment, and deployment of AI.
  • Control‑Steer‑Depend (CSD) framework
    • Control – owning critical layers (e.g., data, models).
    • Steer – shaping market direction through policy levers.
    • Depend – leveraging external assets when internal capacity is lacking.
  • Focus on inference – While much attention is on training infrastructure, the inference phase (delivering AI services to citizens) is where societal impact is realized and where shared hubs can add value.
  • Small‑model strategy – Emphasised that many countries may benefit more from “small, efficient models” rather than chasing frontier models, thereby reducing compute demand.

7. Geopolitical Tensions & the AI Trilemma (Panel Discussion)

  • AI Trilemma (John Haig, Harvard Kennedy School) – Simultaneously maximising speed of development, competitive advantage, and democratic oversight is infeasible; excelling in any two sacrifices the third.
  • Panelists’ consensus – Shared compute hubs can mitigate the trilemma by providing a common‑wealth of resources that balances speed and oversight, but only if trust‑building and clear governance principles are established.
  • Open‑source lessons – Mozilla’s Linda Griffin stressed that open‑source collaboration demonstrated the power of decentralisation and that any shared‑hub framework should embed openness, interoperability, and public‑interest safeguards.
  • Principles for international cooperation
    1. Transparent governance – a public‑interest‑driven charter, with no surprise unilateral policy changes.
    2. Shared funding & risk – joint financing reduces the burden on any single nation.
    3. Technical standards – interoperable APIs, open‑source components, and auditable logs.

8. Indian Industry Perspective (Sangeeta Gupta – Nasscom)

  • Ecosystem health – India’s compute market is expanding rapidly, yet demand outstrips supply; the “Hybrid Model” (government‑backed marketplace plus private‑sector risk‑taking) is deemed a differentiator.
  • Policy‑implementation gaps – Coordination between central and state policies remains fragmented; a unified “Digital‑Athra”‑style framework is needed to streamline land acquisition, power provisioning, and incentives.
  • Energy concern – Emphasised the need for renewable‑energy‑backed compute to ensure sustainability as capacity scales.

9. Closing Reflections (All Panelists)

  • Consensus on collaboration – No single country can build the full AI compute stack alone; shared hubs, open‑source models, and multilateral agreements are essential.
  • Upcoming announcements – Sunil hinted at imminent partnerships involving Yotta, NVIDIA, and other private‑sector investors that could accelerate India’s compute capacity.
  • Call to action – Panel urged the AI community to develop concrete governance charters, fund the “inference” layer, and continue interdisciplinary dialogue throughout the India AI Summit 2026.

Key Takeaways

  • Compute is the new bottleneck – Access to high‑performance compute dictates who can develop, test, and apply AI; its concentration entrenches power imbalances.
  • Shared compute hubs – Jointly financed, regionally hosted, and transparently governed compute facilities could democratise access, especially for the Global South.
  • India’s hybrid model – Government acts as a marketplace, providing funding and regulatory incentives while private firms build and operate GPU‑dense data centres; this model is scaling rapidly but still faces coordination and energy‑supply challenges.
  • Three T’s for equitable AI – Technology (infrastructure), Trust (responsible‑AI safeguards), and Talent (skilled workforce) must be simultaneously strengthened to close the adoption gap (24.7 % vs. 14.1 % usage between North and South).
  • Digital sovereignty re‑defined – Rather than owning the entire stack, nations should aim for strategic agency—the ability to control, steer, or depend on specific layers of AI infrastructure.
  • Control‑Steer‑Depend framework – Provides a practical lens for nations to decide where to invest, where to partner, and where to rely on external providers.
  • Open‑source precedents matter – The success of Linux, Apache, and Mozilla illustrates how decentralized, community‑driven infrastructure can break monopoly power; similar openness should be baked into compute‑hub governance.
  • AI Trilemma mitigation – Shared hubs may balance rapid innovation with democratic oversight, but only if trust‑building, clear rules of law, and public‑interest safeguards are embedded.
  • Policy‑level recommendations – Establish transparent charters, align central‑state incentives, adopt renewable‑energy‑backed compute, and create interoperable, open‑source standards for hub operations.
  • Immediate next steps – Follow‑up announcements from Yotta, NVIDIA, and other investors; publish the Oxford‑Martin shared‑hub paper; convene a working group to draft an international code of conduct for compute‑access governance.
