AI and the New Frontier for Economic Progress: Linking Innovation to Inclusive Growth

Abstract

The panel examined how artificial intelligence can catalyse economic transformation in the Global South while highlighting risks such as widening inequality, job displacement, and governance gaps. Panelists discussed concrete "small‑AI" use cases (for farmers, health workers, and financial inclusion), the structural differences between foundational AI models and downstream applications, and the policy and institutional levers needed to ensure AI's benefits are broadly shared. The discussion concluded with a rapid‑fire forecasting round on the most promising AI‑driven outcomes and the greatest systemic risks by 2035.

Detailed Summary

1. Opening Framing – Johannes Zutt (World Bank)

Zutt opened the session, framing AI as a structural transformation rather than a mere technological upgrade. He emphasized three points:

  1. Scale of Opportunity – AI can enable emerging economies to leapfrog chronic development bottlenecks, especially in agriculture, health, finance, and public‑service delivery.
  2. Evidence from South Asia – World Bank analyses show that ≈15‑16 % of jobs in the region exhibit strong complementarity with AI, meaning AI can boost productivity when paired with human workers.
  3. Dual‑Edged Sword – While AI creates new capabilities, it also threatens entry‑level, knowledge‑intensive jobs. Zutt cited internal World Bank hiring data that show a decline in advertised junior professional positions over recent years.

He outlined four infrastructural constraints that could impede AI uptake in low‑resource settings:

  • Electricity – unreliable or absent grid power
  • Connectivity – weak broadband or reliance on low‑bandwidth, voice‑only services
  • Human Capital – low literacy and numeracy; limited exposure to high‑end devices
  • Governance – weak regulatory frameworks and data‑privacy safeguards

Policy implication – The World Bank advocates “small‑AI”: affordable, locally‑relevant applications that function under limited connectivity and data environments. Zutt highlighted pilot projects in Indian states (Uttar Pradesh, Maharashtra, Kerala, Haryana, Telangana) that co‑design AI tools with governments and private‑sector partners, focusing on offline‑first functionality and trusted data pipelines.


2. Setting the Debate – Moderator’s Framing

Jeanette Rodrigues introduced the panel's central tension: Will AI narrow or widen the development gap? She asked panelists to consider both hope (AI as a catalyst for inclusive growth) and fear (AI‑driven displacement and concentration of power). She also noted that earlier AI summits were dominated by alarmist narratives; this session aimed instead for a pragmatic, policy‑first lens.


3. Creative Destruction & Economic Structure

3.1 Ufuk Akcigit – Macro‑Economic View

Akcigit described AI’s impact through the lens of creative destruction. He distinguished two layers of the AI stack:

  • Foundational layer (large‑scale models, compute‑intensive) – high entry barriers; capital‑, data‑, and talent‑intensive; tends toward concentration among a few incumbents (US, China). Implication: risk of monopoly power, with spill‑over effects on downstream innovation.
  • Application layer (domain‑specific tools) – low entry barriers; enables SMEs to perform tasks previously reserved for large firms (e.g., automated accounting, marketing). Implication: promotes creative destruction at the firm level in advanced economies.

He warned that concentration at the foundational layer can spill over and stifle competition downstream. Akcigit called for early indicators to monitor concentration and suggested that policy should pre‑emptively address bottlenecks.

3.2 Challenges for Emerging Economies

Akcigit argued that, unlike advanced economies, many developing countries lack a vibrant entrepreneurial ecosystem:

  • Historically, firm size in these economies has correlated with family size or the number of male children rather than with market competition.
  • Without reliable property rights, credit markets, and bankruptcy frameworks, risk‑taking is discouraged.

He concluded that AI alone cannot create dynamism; institutional reforms must accompany technology diffusion.


4. Public‑Good AI & Government Role

4.1 Michael Kremer – Development‑Finance Perspective

Kremer focused on government and multilateral intervention to capture AI’s public‑good potential. Key points:

  1. Market Failure – Private firms will prioritize profit‑maximizing AI applications; non‑commercial AI for public services (e.g., AI‑driven weather forecasts, disease surveillance) may be under‑invested.
  2. Illustrative Cases
    • Digital Identity: India’s Aadhaar system provides a trusted data backbone for a range of AI services.
    • AI Weather Forecasts: AI‑enhanced forecasts distributed to 38 million Indian farmers in 2023 improved planting decisions and increased use of hybrid seeds, demonstrating measurable productivity gains.
  3. Policy Instruments
    • Innovation funds (e.g., Development Innovation Ventures) that provide staged financing: pilot grants → rigorous evaluation → scale‑up funding.
    • AI sandboxes: regulatory sandboxes that let innovators test applications under supervised conditions.

Kremer warned that without coordinated public‑sector investment, critical AI tools will remain absent or inaccessible to the poor.

4.2 Anu Bradford – Governance & Sovereignty Perspective

Bradford raised the issue of AI governance sovereignty for the Global South. Her main arguments:

  • The AI regulatory regime is currently dominated by the US and China, with the EU offering a rights‑based approach (the AI Act).
  • Developing countries must decide whether to adopt external frameworks or craft home‑grown regulations that balance innovation with risk mitigation.
  • She highlighted the EU’s focus on fundamental‑rights protection and distribution of AI benefits, suggesting that Southern nations could borrow design principles but must adapt them to local contexts (e.g., tailoring data‑locality rules, language‑access requirements).

Bradford warned against a false dichotomy between regulation and innovation; instead, she advocated innovation‑friendly regulation that addresses market failures without stifling nascent AI ecosystems.


5. Evidence‑Based Development & Impact Evaluation

5.1 Iqbal Singh Dhaliwal – J‑PAL / MIT Perspective

Dhaliwal underscored the necessity of rigorous impact measurement:

  • Randomized Controlled Trials (RCTs) remain the gold standard for evaluating AI interventions.
  • Heterogeneous treatment effects matter: a modest average gain (e.g., +0.3 SD in test scores) can hide distributional inequities if gains accrue only to top‑performing students.

Case Study – AI‑Assisted Essay Grading (Letrus, Brazil)

  • An AI tool that scans essays and provides instant feedback helped lift public‑school rankings in one of Brazil's poorest districts from 11th to 2nd within three years.
  • The AI did not replace teachers; it freed teachers from low‑value grading tasks, allowing them to focus on higher‑order feedback (structure, argumentation).

Dhaliwal emphasized three design principles for scalable AI for development:

  1. Minimum Viable Technology – solutions should work on low‑spec devices and limited connectivity.
  2. Demand‑Driven Development – tools must respond to clear user needs (e.g., teachers wanting grading assistance).
  3. Scalability vs. Pilot Success – early pilots must be designed for scale, not merely to prove a concept.

5.2 Synthesis of Evidence

Across speakers, consensus emerged that AI impact must be measured on the ground, accounting for both technical performance and human‑system integration (training, adoption, trust).


6. Risks, Concentration, and the Future of Labor

6.1 Akcigit on Concentration & Talent Migration

  • Market concentration in AI has accelerated since 2000, especially post‑2012 and post‑2017 foundational‑model breakthroughs.
  • Innovative resources (inventors, patents) are increasingly concentrated in large incumbent firms (≥1,000 employees).
  • Academic talent drain: top AI researchers are moving from universities to industry, reducing open‑science spillovers.

Akcigit warned that excessive concentration could starve the creative‑destruction engine, limiting the diffusion of AI benefits to smaller firms and developing economies.

6.2 Michael Kremer on Public‑Sector Adoption

  • Risks of public‑sector inertia: without strong procurement reforms, AI solutions may remain locked in with a few vendors, curbing competition and driving up costs.
  • Example: AI‑enabled traffic enforcement (e.g., Microsoft Research India’s HAMS system) demonstrated a 20‑30 % reduction in unsafe driver ratings across 56 sites, but scaling requires transparent procurement and open‑access data.

6.3 Dhaliwal on Trust & Institutional Fit

  • Even well‑performing diagnostic AI tools can fail in practice if front‑line workers lack trust or proper training, leading to under‑utilisation or misuse.
  • Policy must address behavioral adaptation: change management, user‑centric design, and iterative feedback loops.

6.4 Akcigit on Labor Market Shock

  • AI threatens entry‑level coding jobs that have been a major driver of regional tech clusters (Gurgaon, Noida, Mohali).
  • With rigid labor regulations (e.g., provident fund, ESI contributions) and tax incentives favoring capital over labor, the mismatch could exacerbate unemployment for low‑skill workers.

7. Governance, Sovereignty, and International Coordination

Bradford highlighted the geopolitical dimension:

  • AI sovereignty is not fully attainable; the global AI stack depends on cross‑regional supply chains (U.S. chips, Taiwan manufacturing, Dutch lithography, Japanese chemicals, Chinese rare‑earths).
  • Tech‑nationalism can backfire: export controls on semiconductors can trigger retaliatory restrictions, harming all economies.

Her recommendation: Finance ministers of developing countries should pursue strategic collaboration with middle powers, emphasizing shared standards and interoperability, while acknowledging the limits of sovereignty.


8. Rapid‑Fire Forecast (2035)

Each panelist answered: One AI‑driven development outcome we’ll celebrate; one risk we’ll regret ignoring.

  • Ufuk Akcigit – Celebrate: healthcare and education radically improved by AI‑enabled diagnostics and personalised learning. Regret: rising market concentration limiting competition and widening inequality.
  • Anu Bradford – Celebrate: transformative, inclusive AI tools in education. Regret: human "dumbing‑down" as over‑reliance on AI erodes critical thinking and creativity.
  • Michael Kremer – Celebrate: public‑sector AI adoption boosting service delivery (e.g., rapid emergency response). Regret: public‑sector failure to adopt AI, excluding poor households; also over‑prescriptive regulation that mis‑targets developing‑country needs.
  • Iqbal Dhaliwal – Celebrate: government productivity leap‑frogging via AI‑enabled services (e.g., instant health‑record linkage). Regret: labor‑market disruption for entry‑level jobs without a policy "dial" to slow the transition.
  • Johannes Zutt – Celebrate: targeted poverty reduction using AI‑driven micro‑analytics. Regret: governance gaps enabling abuse of AI tools and undermining trust.

9. Closing Remarks

Jeanette Rodrigues thanked the panelists, noting the human element—panelists took notes, reflected on trade‑offs, and rejected the notion of outsourcing thinking to models. The session wrapped up with a brief logistical announcement about the next conference segment.

Key Takeaways

  • AI can be a development catalyst when implemented as small‑AI solutions that respect local connectivity, language, and data constraints.
  • Foundational AI models are increasingly concentrated in a few global incumbents; this concentration threatens downstream competition and may impede inclusive diffusion.
  • Public‑good AI (e.g., weather forecasts, digital identity, traffic safety) requires government investment and multilateral coordination because markets alone underprovide them.
  • Rigorous impact evaluation (RCTs, heterogeneous analysis) is essential; pilots must be designed for scale and grounded in user demand.
  • Regulation should be rights‑based yet innovation‑friendly, borrowing from EU principles but tailored to local socio‑economic realities.
  • Labor market disruption is a concrete risk; policies need a “dial” to give workers time to reskill, and labor‑law reforms should avoid penalising hiring while encouraging AI adoption.
  • Trust and institutional fit matter as much as algorithmic accuracy; without training and change‑management, AI tools can underperform or be rejected.
  • Geopolitical interdependence in the AI supply chain means true sovereignty is unattainable; collaborative standards and diversified sourcing are safer strategies.
  • Key sectors poised for AI‑driven gains: health, education, and government service delivery.
  • Systemic risks to monitor: market concentration, erosion of critical thinking, public‑sector inertia, and inadequate procurement competition.
