Making AI for everyone: The case for personal, local, multilingual AI
Detailed Summary
1. Opening & Framing
Moderator (Sushant Kumar) opened the session by framing the problem: AI needs to become personal, local and multilingual so that every community can shape its own tools, keep its data private, and operate without internet connectivity.
He announced a collaborative effort between Bhashini (India’s open‑source language platform) and Current AI, coordinated by Kalpa Impact, that would culminate in the unveiling of a hand‑held, open‑source AI inference device.
Key Insight – The partnership aims to “seize the opportunities given to us by technology if the people most affected by the problems are the ones leading the charge.”
A short concept video was played (audio glitches in the transcript). It dramatized everyday scenarios where an offline, multilingual device could empower users who lack internet or who speak under‑represented languages.
2. Live Demonstration of the Prototype
2.1. Participants
- Andrew Tergis (Current AI – Lead Engineer)
- Shailendra Pal Singh (Bhashini – General Manager)
2.2. Demonstration Narrative
- Device Purpose – The prototype is not a single‑purpose gadget; it is a general‑purpose inference platform that any developer can load with models and applications.
- Flagship Application – “Here the World”
- Target user: a vision‑impaired person.
- Workflow:
- Capture image → local vision model extracts visual context.
- Automatic Speech Recognition (ASR) converts the user’s spoken query (in a native language) to text.
- Neural Machine Translation (NMT) turns the query into English for the large language model (LLM).
- LLM generates an answer based on the image + query.
- NMT translates the answer back to the user’s language.
- Text‑to‑Speech (TTS) vocalises the answer.
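The workflow above can be sketched as a simple function chain. Every function name below is a hypothetical placeholder standing in for an on‑device model stage; none of these are actual Bhashini or Current AI APIs.

```python
# Sketch of the offline pipeline: image + spoken query in a native
# language -> spoken answer in the same language. All stages are
# placeholder stubs; a real device would call local models here.

def asr(audio: bytes) -> str:
    """Automatic Speech Recognition: speech -> native-language text."""
    return "mez par kya hai?"  # placeholder romanised Hindi query

def nmt(text: str, src: str, dst: str) -> str:
    """Neural Machine Translation between src and dst languages."""
    # Placeholder: a real NMT model would translate `text`.
    return f"[{src}->{dst}] {text}"

def vision(image: bytes) -> str:
    """Local vision model: image -> textual scene description."""
    return "a table with several chocolate bars"

def llm(context: str, question: str) -> str:
    """LLM answer conditioned on visual context plus the query."""
    return f"Based on {context}: answer to '{question}'"

def tts(text: str) -> bytes:
    """Text-to-Speech: text -> audio (placeholder returns bytes)."""
    return text.encode("utf-8")

def answer_query(image: bytes, audio: bytes, lang: str = "hi") -> bytes:
    """Chain the six stages exactly as described in the workflow."""
    query_native = asr(audio)                          # 1. capture speech
    query_en = nmt(query_native, src=lang, dst="en")   # 2. to English
    context = vision(image)                            # 3. visual context
    answer_en = llm(context, query_en)                 # 4. LLM answer
    answer_native = nmt(answer_en, src="en", dst=lang) # 5. back to lang
    return tts(answer_native)                          # 6. vocalise
```

The key design point is that every arrow in the workflow is a local model call; nothing in the chain requires a network round trip.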
- Demo Run‑through
- The audience asked a Hindi question about the objects on the table. The system identified “Twix, Milky Way, KitKat” and spoke the answer back in Hindi.
- Andrew highlighted model quantisation: although quantising models typically costs some accuracy, the team reported no measurable degradation after aggressive optimisation.
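As a minimal illustration of what quantisation involves (not the team's actual method), the snippet below applies symmetric per‑tensor int8 quantisation to a random weight matrix and checks the worst‑case reconstruction error, which is bounded by half a quantisation step.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantisation: w ~= scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)  # toy "weights"
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# Rounding to the nearest step means each weight is off by at most
# half a step, i.e. scale / 2 -- the storage cost drops 4x (fp32 -> int8).
max_err = float(np.abs(w - w_hat).max())
assert max_err <= scale / 2 + 1e-6
```

Real deployments layer per-channel scales, calibration data, and quantisation-aware fine-tuning on top of this basic idea, which is presumably how the team recovered full accuracy.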
- Technical Stack – Built on an NVIDIA Jetson module, the current processing platform. The software stack is being designed to be hardware‑agnostic so future versions can run on other edge processors.
- Model Count – Four to five models (ASR, NMT, LLM, TTS) were running on‑device at the time, a notable achievement for an offline system.
- Announcement – The device cleared customs after a logistical delay, confirming that fully offline, multilingual AI hardware can be physically shipped and deployed.
2.3. Audience Reaction
The audience expressed enthusiasm, praising the engineering effort and noting the significance of offline, privacy‑preserving AI for remote or disaster‑prone regions.
3. Fireside Chat #1 – Personal, Local, Multilingual AI
| Speaker | Role |
|---|---|
| Ayah Bdeir | CEO, Current AI |
| Shri Amitabh Nag | CEO, Bhashini |
3.1. Origins & Motivations
- Bhashini began in 2023 with a single‑room office and one employee. Its flagship achievement: 350 AI models covering 22 Indian languages, built from scratch despite a lack of digitised corpora. They gathered data “by brute force”, partnering with translators across the country.
- Current AI was launched at the 2025 AI Action Summit in Paris. It is a public‑private partnership whose mission is to create public‑interest AI and to “rally a global community” around open‑source stacks.
- Both founders stressed that the dominant commercial AI players are built on massive financial and data resources, making it hard for alternative, community‑driven solutions to compete.
3.2. Technical Accomplishments
- Bhashini runs ≈15 million inferences a day on a 200‑GPU farm, with real‑time dashboards showing latency, usage patterns, and user geography.
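A back‑of‑envelope check puts the quoted figures in perspective; these are averages over a whole day and say nothing about peak load or model size.

```python
# Average per-GPU throughput implied by the quoted figures:
# ~15 million inferences/day spread across a 200-GPU farm.
inferences_per_day = 15_000_000
gpus = 200
seconds_per_day = 86_400

per_gpu_per_day = inferences_per_day / gpus            # 75,000
per_gpu_per_second = per_gpu_per_day / seconds_per_day # ~0.87
```

On average each GPU serves under one inference per second, which leaves substantial headroom for bursty, real‑world traffic.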
- Current AI’s open‑hardware ethos mirrors the Linux model: a freely modifiable stack that anyone can extend.
3.3. Personal Language Background
- Ayah Bdeir speaks Arabic (native), French, English, and is learning Spanish, illustrating the multilingual reality of the team.
3.4. Vision for the Future
- Inclusion – A device that works offline, is compact, and supports 36 languages (including newly digitised tribal languages like Bheeli) ensures “no person, no language left behind.”
- Scalability – Plans for a smaller form‑factor, mesh networking of multiple devices, and potential solar‑powered micro‑data‑centers.
- Use‑Case Diversity – From farmers diagnosing crop issues to children’s toys that keep data on‑device, the possibilities are described as “infinite.”
3.5. Concerns
- Embodied AI: Commercial devices (e.g., Meta glasses, Amazon Alexa) constantly stream data to the cloud, often trained on Western‑language corpora, raising privacy and cultural‑bias concerns.
- Lock‑in: When a single hardware platform dominates the AI stack, developers become dependent on proprietary APIs, limiting local innovation.
Key Insight – Open‑source hardware offers a “Linux‑like freedom” that can counteract such lock‑in.
4. Fireside Chat #2 – Policy, Culture & Sovereignty
| Speaker | Role |
|---|---|
| Martin Tisné | Chair, Current AI & Leader, AI Collaborative |
| Shri Abhishek Singh | Senior Official, MeitY, Government of India |
| Anne Bouverot | Senior Official, Government of France |
| Shri Amitabh Nag | CEO, Bhashini (participated in earlier chat but contributed again) |
4.1. Vision of a Multicultural AI Ecosystem
- Martin Tisné framed AI as a tool for citizens and small enterprises rather than a technology for its own sake. He cited the demo as proof that offline AI can empower people without internet or dominant language skills (e.g., a person who cannot read a CAPTCHA).
- Abhishek Singh highlighted India’s linguistic diversity (more than 100 languages) and the need to embed cultural heritage, folklore, and tribal knowledge into AI training data to avoid “hallucinations” that erase local realities.
4.2. Data Reciprocity & Community Rights
- Question – Should communities retain rights over data they provide, and should they benefit when that data powers AI?
- Responses
- Abhishek Singh: Reciprocity must be context‑specific. Agricultural data could be shared to improve farming advice; health data demands stricter privacy and may be shared only under explicit consent.
- Martin Tisné: Emphasised a tension between open‑source data (maximising cultural representation) and individual/artist rights (need for compensation or opt‑out). Suggested mechanisms like right of opposition for living creators.
4.3. Indigenous Data Sovereignty
- Reference to Māori data sovereignty in New Zealand: cultural data belongs to the community, requiring collective governance over how it is used.
- The panel agreed that trustworthy third‑party custodians (e.g., national research institutions) are essential for balancing openness with privacy.
4.4. Open‑Source vs. Controlled Data Governance
- Martin counselled that each dataset should be evaluated for its primary purpose (public‑interest vs. private profit).
- Abhishek reiterated that privacy‑preserving techniques (anonymisation, differential privacy) are needed when data is repurposed for commercial gain.
4.5. Sovereignty in AI
- Definition – Sovereignty means full control over the five layers of AI: data, models, hardware, infrastructure, and applications.
- Abhishek claimed that India is moving toward an end‑to‑end sovereign stack (own data‑centres, chip design, fab capabilities).
- Martin added that no single nation, not even the U.S., owns the entire supply chain, underscoring the need for choice and alternatives.
4.6. France‑India Collaboration
- Both officials saw joint research, policy harmonisation, and co‑funded innovation challenges as the way forward.
- Anne Bouverot drew a parallel with French media quotas (mandatory French‑language content) and suggested similar norms for AI‑generated cultural output could protect national heritage.
- Martin highlighted the “year of joint innovation” announced by the French and Indian presidents, promising university‑level, business‑level, and governmental partnerships.
5. Announcement: India AI Innovation Challenge
- Speaker – Shri Amitabh Nag (Bhashini) together with Martin Tisné and Sushant Kumar.
- Key Points
- Open‑source prototype (the device demonstrated earlier) will be released publicly, with full hardware schematics and software stack.
- Challenge Launch Date – 25 February (submissions open on the Bhashini website).
- Prize Pool – ₹1,10,000 (announced as a “110k prize”) for the winning solution.
- Support – Ongoing quantisation assistance, model‑enrichment help, and mentorship from both Current AI and Bhashini engineers.
- Scope of Applications – Anything from hardware miniaturisation, sector‑specific AI services (agriculture, education, health), to software‑only extensions (new language modules, UI/UX designs).
- Call to Action – “Build, hack, and iterate – let the community turn this prototype into a global public good that works for anyone, anywhere.”
6. Closing Remarks
- The moderator thanked all participants, underscored the “beginning of a journey” toward personal, local, multilingual AI, and invited attendees to continue the conversation through the upcoming challenge and future collaborations.
Key Takeaways
- Open‑source hardware + multilingual language models can deliver offline AI that respects privacy and works in connectivity‑starved environments.
- The prototype demonstrates a full pipeline (ASR → translation → LLM → translation → TTS) entirely on‑device, with no observable loss of accuracy after aggressive quantisation.
- Bhashini’s achievement: 350 models across 22 Indian languages, running ≈15 M inferences daily on a 200‑GPU farm, showcases India’s capacity for large‑scale open‑source language infrastructure.
- Current AI’s philosophy: build public‑interest AI through collaborative, vertically‑integrated, open stacks—mirroring the Linux model to avoid lock‑in.
- Cultural preservation requires digitising tribal languages and embedding indigenous knowledge (e.g., pest‑identification insights) into training data.
- Data reciprocity must be context‑specific: community‑generated data should benefit the contributors, with opt‑out rights for living creators and strict privacy safeguards for sensitive domains (health, personal data).
- AI sovereignty is envisioned as control over the entire stack (data, models, chips, infrastructure, applications). Both India and France are moving toward greater self‑reliance, though complete independence is a long‑term goal.
- Policy considerations: France’s media‑quota model could inspire AI‑content quotas to protect national cultural output; India’s MeitY is crafting frameworks for open but responsible data sharing.
- France‑India partnership: Joint research, shared funding, and coordinated standards can serve as a model for multilateral, culturally inclusive AI governance.
- India AI Innovation Challenge (launch 25 Feb) invites global developers to extend the open‑source device, with a ₹1.1 lakh prize, technical mentorship, and a promise of public‑good licensing.
- The session reaffirmed that AI for everyone is not a distant ideal but a tangible, hardware‑driven reality already being demonstrated and open for community‑led expansion.
See Also:
- a-billion-voices-one-ai-how-language-tech-transforms-nations
- ai-horizons-building-safe-and-trusted-ai
- launch-of-ai-impact-casebooks-health-education
- keynote-i-to-the-power-of-ai-an-8-year-old-on-aspiring-india-impacting-the-world
- safe-ai-building-shared-trust-and-accountability-infrastructure
- building-trustworthy-ai-foundations-and-practical-pathways