Democratizing AI resources and building inclusive AI solutions for India

Abstract

The session explored how India can make artificial‑intelligence technologies affordable, accessible and culturally relevant for its more than a billion citizens. After an opening address that framed AI as a new public‑utility infrastructure, the panel examined concrete levers—open‑source data, low‑cost compute, multilingual models, skill‑building programmes, responsible governance and public‑private partnerships—required to democratise AI and to prevent the technology from widening existing socioeconomic gaps.

Detailed Summary

1. Opening Addresses

  • Moderator’s Opening (Brad Staples) introduced the theme: AI must reach “the unreached” and be more than a purely technical endeavour. Governance, capacity‑building and infrastructure are equally vital.
  • Special address – Mahaveer Singhvi (Additional Secretary, MEA)
    • Asked a provocative question: “Who is AI really for?” – emphasising that AI should serve farmers, teachers, small entrepreneurs and community health workers, not just large corporations.
    • Described AI as the next “intelligence‑era” infrastructure: concentrated in a few hands it will amplify privilege; broadly shared it can expand opportunity.
    • Stated that democratising AI is a developmental imperative for India, given its linguistic diversity, informal sector, and vast rural population.
    • Outlined the Indian government’s practical steps:
      • India AI Mission – shared compute platforms for startups, academia and innovators.
      • Public, consent‑based, anonymised data‑sets to fuel innovation while protecting privacy.
      • AI skilling programmes (AI Future Skilling, etc.) targeted at Tier‑2/3 cities and non‑elite institutions.
      • Language‑inclusion initiatives – e.g., “Bhashini” for speech‑to‑text across hundreds of Indian languages.
      • Domain‑specific pilots – precision agriculture, AI‑assisted diagnostics, adaptive learning platforms, intelligent traffic management.
      • Emphasised responsible AI: fairness, transparency, accountability, data‑protection, and alignment with democratic values.
    • Conceded existing challenges: high compute costs, scarcity of high‑quality domain data, talent pipeline gaps, need for patient capital, and risk of algorithmic bias.
    • Called for collective urgency – government, industry, academia, startups, civil society and global partners must co‑own the mission.

2. Panel Discussion

2.1 Skills‑building (Rajita Kulkarni – Sri Sri University)

  • Sri Sri University leverages its link to the Art of Living Foundation and its spiritual founder to embed human‑centric values into AI curricula.
  • Offers 24 emerging‑tech programs (AI, ML, data science, cybersecurity) that are industry‑integrated via partnerships with Intel, IBM, Wipro, etc.
  • Runs an “AI for All” foundation course accessible to every discipline – agriculture, Ayurveda, psychology, architecture, etc.
  • Houses centres of excellence, notably a Centre for Digital Humanism (one of the world’s first) to keep human needs central to AI design.
  • Supports 180 AI‑focused startups through incubators, including a Department‑of‑Science‑&‑Technology‑backed cohort.
  • Pursues inclusive outreach: training programmes in Tier‑2/3/4 cities, targeting women entrepreneurs and rural innovators.

2.2 Business‑level Deployment (Sandeep Saxena – HCLTech)

  • Frames AI through a “people, purpose, planet” lens: it must improve citizen services, drive growth, and respect environmental considerations.
  • Highlights the need for a holistic AI strategy: governance, secure data pipelines, responsible AI frameworks – the “foundation” before mass deployment.
  • Stresses two technical imperatives for India:
    1. India‑specific LLM training – existing large‑language models are Western‑centric; they must be fine‑tuned on Indian languages, cultural contexts and regulatory constraints.
    2. High‑quality, locally‑relevant data – quantity alone is insufficient; data must be curated, validated and reflective of Indian diversity.
  • Uses a child‑learning analogy to illustrate the risk of “toxic” training data producing harmful outputs.
  • Declares confidence that India will lead the AI revolution by embracing its unique scale and diversity.

2.3 Payment‑ecosystem & Financial Inclusion (Sharath Bulusu – Google Pay)

  • Describes the dual nature of the moment – “best of times, worst of times.”
  • Fear side: risk of digital exclusion for people without smartphones or connectivity.
  • Hope side: rapid proliferation of mobile connectivity and internet access, creating a platform for inclusive services.
  • AI for fraud detection: In UPI, AI‑driven alerts prevent ≈ 13,000 crore INR (~$1.5 bn) of fraud annually (≈ 0.03 % of total transaction volume).
  • AI for credit assessment: Modelling creditworthiness from UPI transaction trails permits inclusion of entrepreneurs lacking traditional credit histories.
  • Emphasises AI’s language‑agnostic capability – conversational agents can serve users in any Indian language, even in remote regions (e.g., Nagaland).
  • Warns that technology is neutral; human intent determines whether AI becomes a weapon or a public good.
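For readers unfamiliar with Indian number units, the fraud figures quoted above can be checked with a quick back‑of‑the‑envelope calculation. The ~83 INR/USD exchange rate below is our assumption, not a figure from the session:

```python
# Sanity check of the UPI fraud-prevention figures quoted in section 2.3.
CRORE = 1e7  # 1 crore = 10 million

fraud_prevented_inr = 13_000 * CRORE  # ~13,000 crore INR prevented per year
inr_per_usd = 83                      # assumed exchange rate (not from the session)

# Convert to billions of US dollars; should land near the quoted ~$1.5 bn.
fraud_prevented_usd_bn = fraud_prevented_inr / inr_per_usd / 1e9
print(f"~${fraud_prevented_usd_bn:.2f} bn")

# If that sum is ~0.03% of total transaction value, the implied annual
# UPI volume (in lakh crore INR; 1 lakh crore = 1e12) is:
implied_total_lakh_crore = (fraud_prevented_inr / 0.0003) / 1e12
print(f"~{implied_total_lakh_crore:.0f} lakh crore INR")
```

The conversion comes out at roughly $1.5–1.6 bn, consistent with the figure quoted on stage.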

2.4 Edge‑Computing & Infrastructure (Shoma Foucan – Intel)

  • Positions AI democratization as a multi‑layered ecosystem: hardware, connectivity, platforms and skills.
  • Hardware cost reduction – Intel is lowering chip‑fabrication expenses, improving yield, and investing in photonics and Compute Express Link (CXL) interconnects to cut latency for edge devices.
  • Platform stratification – Not all users need high‑end servers; PC‑class devices can run lightweight AI workloads for many classroom‑type applications.
  • Edge‑centric data processing – Promotes hybrid LLMs that keep inference close to the user, limiting data movement to central data‑centers, thereby supporting data sovereignty.
  • Skill pledge – Intel committed to training 100,000 AI‑proficient users during the summit and urges participants to take the pledge.

2.5 Open Knowledge & Language Resources (Praveen Das – Wikimedia Foundation)

  • Highlights India’s “country‑within‑a‑country” linguistic landscape; Indic languages account for only ≈ 1 % of current LLM training data.
  • Advocates open‑knowledge platforms (e.g., Wikipedia) as trusted data sources that can be harvested for high‑quality, verifiable corpora.
  • Calls for a “data commons” where volunteers contribute knowledge‑donations (curated, vetted content) to enrich low‑resource language datasets.
  • Mentions a ChatGPT plugin built with Wikimedia to guarantee that AI responses draw from verified sources.
  • Urges collaboration with industry (Google, etc.) to expand Indic language models and to foster community‑driven data creation.

2.6 Closing Reflections (Moderator & Audience)

  • Participants acknowledged optimism (AI solving problems it creates) and concern (risk of AI widening wealth gaps).
  • Emphasised that regulation‑first approaches (as seen in Europe) may stifle innovation, while government‑enabled frameworks (as in India) can balance safety and growth.
  • Consensus that strong leadership, inclusive policy, and cross‑border collaboration are mandatory for a “win‑win” outcome.

3. Q&A / Audience Interaction

  • The transcript does not contain distinct, numbered audience questions; instead, the discussion flowed as a continuous dialogue.
  • Key queries revolved around: data‑set quality, language coverage, cost‑effective compute for SMEs, skilling metrics, and the role of AI in fraud prevention and credit access.

4. Announcements & Commitments

  • India AI Mission (shared compute) – Government‑backed cloud/compute resources for startups, academia, NGOs.
  • Bhashini language platform – Speech‑to‑text and translation services for hundreds of Indian languages.
  • AI Future Skilling Programme – Nationwide AI‑skill courses targeting Tier‑2/3 cities and non‑elite institutes.
  • Intel “AI Skills Pledge” – Commitment to train 100,000 participants during the summit.
  • Wikimedia ChatGPT plugin – Ensures AI answers are sourced from verified Wikipedia content.
  • Google Pay AI fraud detection – AI alerts preventing ~₹13,000 crore of fraud per year.

Key Takeaways

  • AI as Public Infrastructure – The Indian government treats AI like electricity or the internet: a foundational utility that must be universally accessible.
  • Inclusive data is essential – High‑quality, consent‑based, multilingual datasets are the single biggest bottleneck; open‑knowledge platforms (e.g., Wikipedia) are crucial sources.
  • Cost‑effective compute & edge solutions – Reducing hardware expenses and bringing inference to the edge (via photonics, hybrid LLMs) are pivotal for Tier‑2/3 reach.
  • Skill‑building at scale – Partnerships between universities, industry and government aim to train more than 100,000 AI‑competent citizens, especially outside elite metropolitan centres.
  • Responsible AI governance – Fairness, transparency, data‑sovereignty and ethical frameworks are being codified alongside innovation.
  • AI‑driven financial inclusion – AI‑based fraud detection and credit‑scoring on UPI data already protect billions and expand credit access for small entrepreneurs.
  • Language inclusion unlocks equity – Vernacular speech‑to‑text, translation and LLM fine‑tuning enable services for users in dozens of regional languages, preserving cultural dignity.
  • Public‑private‑civil‑society ecosystem – Successful democratisation hinges on coordinated action among government, corporates (Google, Intel, HCLTech, Nokia), academia (Sri Sri University), NGOs (Wikimedia) and industry bodies (FICCI).
  • Urgency and collective ownership – All panelists concurred that without swift, collaborative action AI could exacerbate existing inequalities; the session placed a clear call to act now.
