Multilingual AI in Universities - Advancing Inclusion Through UA and Stakeholder Insights
Abstract
The panel examined how universities can foster genuine, multilingual participation in the development and deployment of AI systems. Drawing on the Diffusion of Innovation Theory and the Technology Acceptance Model, the discussants highlighted the gap between current, largely “north‑centric” governance frameworks and the need for inclusive, community‑driven processes. They traced the evolution of policy mechanisms from voluntary commitments to enforceable standards, invoked lessons from social‑media regulation, and emphasized the importance of power‑sharing and local expertise when building language models for diverse linguistic contexts.
Detailed Summary
1. Opening & Framing
- Welcomed participants to the session “Whose Language, Whose Model?”
- Noted the absence of the scheduled facilitator, Marlena Wisniak (European Center for Not‑for‑Profit Law), and reassured the audience that the discussion would remain robust.
- Framed the central question: How can meaningful multi‑stakeholder participation be built across the AI lifecycle (design → deployment → post‑deployment) to ensure equity, especially for multilingual communities?
- Highlighted two overarching concerns:
  - Participation must avoid becoming a “box‑checking” exercise.
  - Policy and regulatory levers need teeth, i.e., enforceable standards that move beyond voluntary promises.
2. Policy & Regulatory Landscape (Jalak Kakkar)
2.1 The Trajectory from Consensus‑Building to Hard Law
- Consensus Phase: In nascent technology domains, stakeholders first seek shared understandings, leading to “soft law” (voluntary commitments, industry codes).
- Risk of Over‑Extension: Citing social‑media regulation, Kakkar warned that prolonged reliance on voluntary measures lets platforms operate without accountability, creating a “mess” once harmful outcomes surface.
2.2 Need for a Regulatory “Cut‑off”
- Advocated for a timely transition to harder legal instruments (standards, statutes, binding international commitments).
- Suggested that early regulation could mandate process‑based requirements (e.g., impact assessments, bias audits) rather than prescribing specific technical outcomes.
2.3 Process Integration for Harm Identification
- Emphasized embedding systematic checks at design and deployment stages to surface potential harms, especially those affecting linguistically marginalized groups.
2.4 Call for Immediate Action
- Stressed that without a shift toward enforceable mechanisms within the next few years, the AI governance ecosystem risks repeating the “social‑media” failure mode.
3. Characteristics of Meaningful Multi‑Stakeholder Participation (Danraj Thakkar)
3.1 Power Distribution as the Core Issue
- Participation is fundamentally a question of who holds decision‑making power.
- Effective collaboration must confront entrenched power asymmetries between technology firms, governments, and affected communities.
3.2 Lessons from International Development (Chambers, 1990s)
- Cited Robert Chambers’ “participatory rural appraisal” model: start with the needs and concerns of target communities, allowing them to define relevant technology use‑cases.
- Contrasted this with the prevailing “frontier‑model” approach where companies first deploy a model and later solicit user feedback—a reactive rather than proactive stance.
3.3 Community Expertise in Linguistic Contexts
- Communities possess intimate knowledge of local linguistic nuances, cultural references, and usage patterns—critical for training inclusive multilingual language models.
- Highlighted that early involvement of these experts can shape data selection, bias mitigation strategies, and evaluation metrics.
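One concrete form such community-shaped evaluation metrics could take is a disaggregated performance check across language communities. The sketch below is purely illustrative (the helper function, language names, and evaluation counts are all hypothetical, not from the panel); it shows the kind of simple, auditable signal that local language experts could help define and monitor.

```python
# Illustrative sketch: a disaggregated evaluation metric for a
# multilingual model. All names and numbers here are hypothetical.

def language_performance_gap(results):
    """Given {language: (correct, total)} evaluation counts, return
    per-language accuracy and the gap between the best- and
    worst-served languages -- one signal a community-led audit
    might track over time."""
    accuracy = {
        lang: correct / total
        for lang, (correct, total) in results.items()
    }
    gap = round(max(accuracy.values()) - min(accuracy.values()), 4)
    return accuracy, gap

# Hypothetical evaluation counts for three language communities.
eval_counts = {
    "English": (940, 1000),
    "Hindi": (810, 1000),
    "Marathi": (720, 1000),
}

accuracy, gap = language_performance_gap(eval_counts)
print(accuracy)  # per-language accuracy
print(gap)       # 0.22 gap between best- and worst-served languages
```

The point is not the specific metric but who defines it: involving community experts early determines which languages, dialects, and tasks appear in `eval_counts` at all.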
3.4 Building on Existing Knowledge Rather Than Starting From Scratch
- Encouraged leveraging cross‑sectoral best practices (privacy‑enhancing technologies, human‑rights‑focused AI) that already embed stakeholder participation.
- Stressed that the AI policy arena should not invent new participation frameworks in isolation but adapt proven models from related domains.
4. Synthesis & Emerging Themes
| Theme | Insight |
|---|---|
| From Voluntary to Enforceable | Soft‑law mechanisms are valuable for early consensus but must be time‑bounded; standards and statutes are needed to guarantee accountability. |
| Power Re‑balancing | Genuine inclusion requires shifting decision‑making power to community representatives, not just token consultation. |
| Community‑First Design | Reversing the current order—letting communities articulate needs before model deployment—produces more relevant, equitable AI. |
| Linguistic Expertise | Multilingual AI for universities benefits from embedding local language experts early, ensuring that language models respect diverse scripts, dialects, and sociolinguistic contexts. |
| Cross‑Domain Learning | AI governance can accelerate by adopting participation frameworks from development, privacy, and human‑rights fields. |
5. Open Questions & Points of Debate
- Operationalizing Power Shift – What concrete governance structures (e.g., co‑governance boards, community advisory panels) can materially redistribute authority?
- Defining Enforceable Standards – Which metrics and processes should be codified into law to ensure compliance without stifling innovation?
- Scalability Across Universities – How can a model of community‑first, multilingual AI be scaled to the heterogeneous landscapes of Asian and European higher‑education institutions?
- Funding & Incentives – What economic incentives (grants, public‑private partnerships) are needed to support extensive stakeholder engagement?
6. Q&A Highlights (Excerpt)
- Question (Audience): How can universities evaluate whether their multilingual AI initiatives truly reflect community needs?
  - Danraj Thakkar: Recommended a participatory impact assessment co‑designed with local language scholars and student groups, incorporating both quantitative bias metrics and qualitative feedback loops.
- Question (Audience): What role should national policy bodies play in enforcing participation standards?
  - Jalak Kakkar: National regulators should adopt process‑oriented mandates (e.g., mandatory public consultations before model release) and empower an independent oversight authority to enforce compliance.
Key Takeaways
- Transition Needed: Voluntary, soft‑law commitments are insufficient; enforceable standards and process‑based regulations must be introduced within the next few years to avoid repeating social‑media governance failures.
- Power Distribution: Meaningful participation hinges on shifting decision‑making power to the communities directly impacted by AI systems, especially linguistically diverse groups.
- Community‑First Approach: Effective AI development should start with community‑identified needs and use‑cases rather than retrofitting models after deployment.
- Leverage Existing Frameworks: AI policy can adopt participatory methods proven in international development, privacy, and human‑rights domains, reducing the need to invent new mechanisms from scratch.
- Linguistic Expertise is Critical: Incorporating local language scholars early ensures that multilingual models respect cultural nuances, improving both fairness and utility in university contexts.
- Cross‑Sector Collaboration: Universities, industry, civil society, and government must co‑design governance structures (e.g., advisory boards, impact assessments) to sustain inclusive AI ecosystems.
- Open Challenges Remain: Defining concrete governance bodies, scaling participation models across diverse institutions, and aligning incentives with inclusive outcomes are ongoing challenges that need further research and policy action.
See Also:
- power-protection-and-progress-legislators-and-the-ai-era
- multistakeholder-partnerships-for-thriving-ai-ecosystems
- beyond-the-cloud-the-sovereign-ai-moment
- data-sharing-infrastructures-for-ai-building-for-trust-purpose-and-public-values
- founders-funders-the-india-ai-capital-ecosystem
- empowering-communities-in-the-age-of-advanced-ai-inclusion-and-safety-for-sustainable-development