AI Horizons: Building Safe and Trusted AI
Detailed Summary
1. Opening & Local AI Ecosystems (0‑10 min)
- Moderator introduced the theme: equitable AI development across both the Global North and the Global South, the need for responsible data sharing, and the role of public‑good data ecosystems.
- Dr M M Tripathi (NILAT) described NILAT’s network of 56 AI‑lab centres across India, the NILAT Digital University (50,000+ registered students), and concrete AI tools built by students:
  - Sign‑World – Indian Sign Language video/audio conversion.
  - Audio‑to‑audio translation for Northeastern languages (e.g., Hindi ↔ Manipuri).
  - Kaleen‑Designer – AI‑assisted pattern generation for traditional carpet makers.
- Emphasis on open data for research, model building, and the creation of locally hosted model instances rather than reliance on imported LLMs.
2. Data Sovereignty & the Global South (10‑30 min)
- Dr Ramananji (a university vice‑chancellor) raised the examination‑grading problem for 5 lakh (500,000) students, amounting to roughly 30 lakh (3 million) answer scripts. He argued that AI‑assisted scanning and automated marking could improve speed, accuracy, and transparency, but warned of bias against creative answers and stressed the need for dual‑track (human + AI) validation.
- Kanishk Gaur (India Future Foundation) highlighted India’s data‑center push in the Union Budget (tax holidays, incentives) to create employment, revenue, and AI‑model training capacity. He warned about geopolitical, environmental, and resource‑allocation challenges (electricity, water, location politics).
- Discussion on “cognitive colonialism” – the risk that Indian users become data‑suppliers while foreign firms reap the economic value.
- Natasha Crampton (Microsoft) noted Microsoft’s AI‑for‑Good initiatives and the need for trustworthy, privacy‑preserving tools.
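The dual‑track (human + AI) validation idea raised above can be sketched as a simple decision rule: an AI‑assigned mark is accepted only when it agrees with a human sample check within a tolerance, otherwise the script is escalated for full human review. The function name and threshold below are illustrative assumptions, not something described by the panel.

```python
# Hypothetical sketch of dual-track (human + AI) grading validation.
# Names and the tolerance value are assumptions for illustration.

def dual_track_decision(ai_score: float, human_score: float,
                        tolerance: float = 5.0) -> str:
    """Accept the AI mark if it agrees with the human sample check
    within `tolerance` marks; otherwise escalate to full human review."""
    if abs(ai_score - human_score) <= tolerance:
        return "accept"
    return "escalate"

print(dual_track_decision(72.0, 74.5))  # scores agree -> accept
print(dual_track_decision(40.0, 68.0))  # large gap (e.g., a creative answer) -> escalate
```

The escalation path is the safeguard against the bias‑toward‑conventional‑answers problem: disagreement is treated as a signal for human judgment, not as an error to be averaged away.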
3. Safety, Security, and the Seven‑Layer Model (30‑45 min)
| Layer | Example Concern |
|---|---|
| Data | Quality, bias, provenance |
| Model | Explainability, robustness |
| Content | Disinformation, deep‑fakes |
| Infrastructure | Secure cloud & edge compute |
| Application | Secure APIs, UI‑level guardrails |
| Identity/Access | Authentication, role‑based controls |
| Governance | Policy, audit, liability |
- Mandar Kolkarni introduced the “seven‑by‑four” matrix (seven technical layers × four stakeholder groups: model developers, application builders, deploying organisations, end‑users). He warned against “shadow AI” (unauthorised AI tools proliferating inside enterprises) and advocated a shared‑responsibility approach.
- Dr M M Oberoi (Google) described Google’s Safe AI framework (six pillars), focusing on secure‑by‑design infrastructure, automated guardrails such as Model Armor and SynthID watermarking, and a shared‑responsibility model.
- Dr Pawan Dugal (law‑tech) criticised existing AI regulations as “six blind men describing an elephant.” He outlined his graded‑liability model, in which coder, provider, marketer, data principal, and user each bear proportional responsibility. He also introduced an AI Accountability Framework (January 2026) and called for an “AI Constitution” protecting cognitive rights.
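The “seven‑by‑four” matrix described above (seven technical layers × four stakeholder groups) can be sketched as a simple data structure. The specific responsibility assignments below are placeholder assumptions for illustration; the panel did not specify cell contents.

```python
# Illustrative sketch of the "seven-by-four" shared-responsibility matrix:
# seven technical layers crossed with four stakeholder groups.
# Cell values mark where, in this sketch only, a stakeholder carries
# primary (True) rather than shared (False) responsibility.

LAYERS = ["data", "model", "content", "infrastructure",
          "application", "identity_access", "governance"]
STAKEHOLDERS = ["model_developer", "application_builder",
                "deploying_org", "end_user"]

# Start with shared responsibility everywhere.
matrix = {layer: {s: False for s in STAKEHOLDERS} for layer in LAYERS}

# Example primary assignments (assumptions, not from the panel):
matrix["model"]["model_developer"] = True
matrix["application"]["application_builder"] = True
matrix["governance"]["deploying_org"] = True

# Sanity check: the grid is genuinely seven-by-four.
assert len(matrix) == 7 and all(len(row) == 4 for row in matrix.values())
```

Making the grid explicit is one way to surface “shadow AI”: any deployed tool that cannot be placed in a cell of the matrix has no accountable owner.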
4. Privacy, Consent, and the DPDP Act (45‑55 min)
- Audience member (privacy counsel) asked how to protect personal data when training AI. Panel stressed privacy‑by‑design, data‑labeling, and compliance with India’s DPDP Act (data‑protection law).
- Dr Dugal highlighted data‑co‑ownership – institutions may expose only public‑interest datasets (e.g., anonymised merit lists) while protecting PII (phone numbers, addresses).
- Mandar reinforced that regulatory guardrails need technology (e.g., rapid takedown of deep‑fakes within a 3‑hour window) and that trust between regulators and big‑tech is essential.
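The data‑co‑ownership point above — exposing only public‑interest fields (such as an anonymised merit list) while withholding PII — amounts to a whitelist filter over records. The field names below are assumptions chosen for illustration.

```python
# Minimal sketch of publishing only public-interest fields of a record
# while stripping PII (phone numbers, addresses). Field names are
# illustrative assumptions, not a prescribed schema.

PUBLIC_FIELDS = {"rank", "score", "district"}

def public_view(record: dict) -> dict:
    """Return only the whitelisted public-interest fields of a record."""
    return {k: v for k, v in record.items() if k in PUBLIC_FIELDS}

student = {"rank": 12, "score": 91.5, "district": "Varanasi",
           "phone": "9800000000", "address": "12 Example Road"}
print(public_view(student))  # PII fields are dropped
```

A whitelist (enumerate what may be published) is the safer default here: a blacklist of known PII fields fails silently when a new sensitive field is added to the schema.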
5. Bias, Ethics, and Inclusion (55‑70 min)
- Audience raised concerns about algorithmic bias (caste, gender, language) and the absence of women on the panel.
- Dr Dugal responded that bias can be mitigated through transparent data pipelines, localized language models, and inclusive governance.
- Kanishk Gaur added that regional repositories (knowledge‑sharing hubs) and regulatory sandboxes are required to test culturally‑sensitive AI.
6. International Cooperation & Sovereignty (70‑85 min)
- David Wroe argued that global cooperation (shared standards for illicit AI use, e.g., bioterrorism) does not erode sovereign control; rather, it prevents private‑sector dominance by foreign firms.
- Dr Bahl (CERT‑In) described India’s capacity to supply talent but flagged gaps in compute resources and model ownership.
- Mandar stressed that multilateral agreements can provide baseline safeguards while allowing national‑specific implementations (e.g., AI‑generated misinformation counter‑measures).
7. Q&A Highlights (85‑115 min)
| Question | Main Points from Panel |
|---|---|
| Privacy counsel: “How do we ensure privacy in AI tools?” | Emphasised privacy‑by‑design, data labeling, DPDP compliance, and algorithmic safeguards (e.g., synthetic data generation). |
| Audience: “What about algorithmic bias and gender representation?” | Acknowledged lack of female panelists, called for local language models, bias audits, and diverse data collection. |
| Regulation vs. Innovation: “Are we over‑regulating?” | Consensus: minimal, principle‑based regulation is needed; dynamic, not static, rules that evolve with technology. |
| Shared responsibility: “Can regulators and big‑tech truly collaborate?” | Yes, but trust must be built via transparent standards, joint‑incident‑response teams, and industry‑led technical guardrails. |
| Future governance model: “What should it look like for next‑gen tech?” | Suggest a principle‑based, flexible framework with sectoral sandboxes, regional repositories, and global interoperability. |
| Agent‑to‑agent interaction: “Risks of autonomous AI agents?” | Noted emerging behavior when multiple agents interact; called for new legal principles and continuous monitoring. |
8. Closing Remarks (115‑125 min)
- Mandar Kolkarni summed up three essential layers: industry‑driven trustworthy frameworks, government‑mandated regulation, and user education – all must be mutually reinforcing.
- David Wroe stressed that states must be the standard‑setters for AI, not private firms alone.
- Dr Bahl reminded the audience that invisible safety builds visible trust – the goal is to embed safety silently into infrastructure.
- Kanishk Gaur highlighted the need for art + science balance, invoking traditional Indian wisdom for modern AI governance.
Key Takeaways
- Local data ecosystems matter: India’s NILAT network and university‑level AI labs show how open, community‑owned data can power home‑grown models and reduce dependence on foreign LLMs.
- AI‑assisted assessment can improve transparency in massive education systems, but must retain human oversight to protect creative expression.
- Seven‑layer security model (data → governance) is essential; shared responsibility across developers, deployers, organisations, and end‑users prevents “shadow AI” risks.
- Privacy must be baked in via DPDP‑aligned data labeling, synthetic data, and strict PII exclusion—especially when training large models.
- Bias mitigation requires localized language models, inclusive datasets, and regular audits; diversity in panel composition mirrors the need for inclusive AI design.
- Regulation should be principle‑based, dynamic, and sector‑specific (regulatory sandboxes, regional repositories) to keep pace with rapid AI evolution.
- International cooperation does not threaten sovereignty; rather, it protects against private‑sector dominance by foreign tech giants and promotes common safety standards.
- Graduated liability (“graded‑liability model”) distributes accountability among coders, providers, marketers, data principals, and users, acknowledging that AI harms can be multi‑faceted.
- User education and critical thinking are indispensable—building a citizenry that can spot misinformation is as crucial as technical guardrails.
- Future AI governance must blend art and science, leveraging India’s historic emphasis on balanced approaches while integrating modern technical safeguards.
See Also:
- making-ai-for-everyone-the-case-for-personal-local-multilingual-ai
- sovereign-ai-for-india-designing-the-nations-future-compute-data-and-innovation-ecosystem
- ai-impact-forum-democratising-ai-resources
- india-japan-ai-for-the-world
- democratizing-ai-for-the-last-mile-language-access-and-trust-at-scale
- power-protection-and-progress-legislators-and-the-ai-era
- ai-and-the-future-of-work-employability-skills-and-labour-market-transformation
- inside-indias-frontier-lab-and-its-global-south-impact
- navigating-the-ai-regulatory-landscape-a-cross-compliance-framework-for-safety-and-governance
- ai-capacity-building-scaling-knowledge-driving-innovation