Safeguarding Children in India’s AI Future: Towards Child-Centric AI Policy and Governance

Abstract

India’s rapid rollout of national AI strategies brings children’s rights, safety and development to the fore. The session opened with Zoe Lambourne’s evidence‑based briefing on the scale and nature of AI‑mediated harms to minors, followed by a presentation of the Ministry of Electronics & Information Technology (MeitY) expert‑engagement group’s recommendations. Panelists from industry, academia, civil society and government debated concrete design‑by‑well‑being principles, legal‑tech frameworks, enforcement bottlenecks, and the need for coordinated multi‑stakeholder mechanisms. The discussion highlighted the urgency of moving from abstract safety rhetoric to actionable, child‑centric governance.

Detailed Summary

1. Opening Remarks

  • Moderator (Ashish Jaiman) opened the session, emphasizing the need for “bold, honest, and constructive” dialogue that moves beyond abstract AI‑ethics principles.
  • He thanked the audience, introduced the partnership with Childlight, iSPIRT, and Space2Grow, and announced that the expert‑engagement group had been set up by MeitY to produce policy recommendations on AI and child safety.

2. Research Presentation – “Child‑Centric AI”

Speaker: Zoe Lambourne (Childlight)

  • Scope of the problem: AI can be a “golden thread” for child development and a conduit for abuse.
  • Types of AI‑enabled abuse: persuasive disinformation and dangerous advice; AI‑generated sexual‑abuse material (real and synthetic, including “nudification”); deep‑fakes targeting minors.
  • Global statistics: more than 300 million children worldwide suffered technology‑facilitated abuse in 2024; AI‑generated sexual‑abuse material rose 1,325 % year‑on‑year.
  • India‑specific survey (410 Indian youth surveyed):
    Perception of AI – the majority view AI as both beneficial and risky; only 25 % feel “online life is safe”.
    Gender gap – young women report higher perceived risk, harassment, and image‑based abuse.
  • Responsibility attribution: 48 % of respondents place primary safety responsibility on technology companies, followed by parents/carers and the government.
  • Policy insight: safety cannot stop at product design; it must include continuous monitoring, rapid‑response helplines, and compensation mechanisms.
  • Long‑term impact: abuse can affect mental health, relationships, parenting, and economic participation for decades.
  • Call to action: re‑frame “child safety” as “child well‑being” and embed it throughout AI policy – not as a token phrase but as a design and governance mantra.

3. Expert‑Engagement Group – Process & High‑Level Recommendations

Speaker: Gaurav Aggarwal (iSPIRT, chair of the group)

  • Described a multi‑step process: formation of a knowledge‑partner consortium (iSPIRT, Space2Grow, Childlight), stakeholder consultations (tech firms, lawyers, law‑enforcement, civil‑society), a pre‑summit event (27 Oct) with industry inputs, and a final draft submission to MeitY.

  • Highlighted three structural recommendations that would shape the forthcoming AI‑policy declaration for India:

    1. Child‑Safety Solutions Observatory – a national repository that aggregates innovations, best practices, and impact‑assessment tools.
    2. Global‑South Working Group – a collaborative network to position India’s experience as a model for other emerging economies.
    3. Child‑Safety Innovation Sandbox – a challenge‑based platform (Q2 2025) to prototype AI‑driven digital‑harm mitigation tools, overseen by a Youth Safety Advisory Council.
  • Stressed that children must be at the centre of every solution – not merely a compliance checkbox.

4. Panelist Contributions

4.1. Chitra Iyer (Space2Grow) – Re‑naming & Legal‑Tech Architecture

  • Terminology shift: Replace “child safety” with “child well‑being” – safety is a patriarchal, restrictive term; well‑being captures the holistic benefits of AI (e.g., educational access in remote villages).
  • Inclusive governance: Emphasised the need to involve parents, carers, and community in policy formulation; without their consent, any top‑down regulation would be illegitimate.
  • Techno‑legal framework: Proposed a “well‑being‑by‑design” approach akin to India’s UPI model – a self‑executing technical system that does not rely on voluntary corporate compliance.
  • Stakeholder coordination: Called for a single, cross‑sector “observatory” to avoid fragmented efforts.

4.2. Ashish Jaiman (Snap – Moderator) – Safety‑by‑Design in a Messaging Platform

  • Product‑level safeguards:
    – Bidirectional friend acceptance to block unsolicited contacts.
    – Location sharing off by default to reduce privacy risk.
    – Ephemeral messaging mirroring real‑life conversation decay.
    – Family Center: a parental‑visibility dashboard that respects user privacy while providing oversight.
  • Iterative design: Snap rolls out new safety features every quarter to keep pace with the rapidly evolving AI threat landscape.
  • AI‑risk reality: Early adopters of any new AI will see both academic and adversarial usage; thus, proactive design is essential.

4.3. Uthara Ganesh (Snap) – Transparency & Edge‑Computing

  • AI literacy focus: Position AI as a tool, not a magical entity; teach children to deconstruct black‑box models.
  • Local processing: All AI inference runs on‑device, never sending raw data to the cloud; no login or personal identifiers leave the child’s device.
  • Model cards: Use transparent documentation (training data provenance, geographic diversity) for any pre‑trained models employed in educational tools.
  • No anthropomorphising: Snap avoids giving AI a “human” persona to prevent unhealthy emotional bonds.

4.4. Akash Pugalia (Teleperformance) – Operational Realities in India

  • Enforcement gaps: Inconsistent law enforcement across states, multiple languages, and low digital‑literacy create a fragile safety net.
  • Identity‑mark requirement: AI‑generated content should carry a visible watermark/metadata so children can recognise synthetic media.
  • Age‑appropriate design code: India needs a stand‑alone children‑online‑safety statute, separate from the broader Data Protection and IT Rules, to address the specificities of minors.
  • Rapid‑response cycles: Call for a multi‑stakeholder rapid‑action protocol (industry, law‑enforcement, NGOs) to cut takedown times from days to hours.

4.5. Maya Shermon (GPAI, Israel Embassy) – Global Benchmarks & Cooperation

  • Learning from abroad: Highlighted the UN Convention against Cybercrime and the need for India to become a signatory to enable cross‑border data sharing and investigations.
  • Scenario‑based standards: Urged adoption of nuanced, context‑sensitive benchmarks rather than one‑size‑fits‑all metrics.
  • Global‑South narrative: Position India as a leader for emerging economies in child‑centric AI policy; share best‑practices through the Global‑South Working Group.
  • Fast‑track takedowns: Recent amendment reduced the mandatory removal window for child‑sexual‑abuse material from 24 hours to 2 hours under the IT Rules.
  • Duty of care: Emphasised that platforms must bear legal responsibility for harmful AI‑generated content; “burden of proof” should shift to the industry.
  • Public‑awareness: Mandatory digital‑safety curricula for parents, teachers, and children to build a “culture of vigilance”.

4.6. Shireen Vakil (Space2Grow) – School Safeguarding & Child‑to‑Child Harm

  • Peer‑to‑peer risk: Stressed that children can also be perpetrators (bullying, distribution of deep‑fakes). Schools must teach the difference between jokes and crimes.
  • Legal literacy: Introduced Juvenile Justice Act basics to students so they understand personal accountability.
  • Rapid incident response: Highlighted that the online‑crime portal (cybercrime.gov.in) often delays action; urged direct police engagement for faster remediation.

4.7. Audience Q&A Highlights

  • Pallavi (Ekigai Law), to Uthara – Balancing safety vs. privacy for AI‑enabled bots (e.g., self‑harm prompts). Uthara outlined on‑device risk assessment and optional parental alerts triggered only when a predefined risk threshold is crossed, preserving child privacy while enabling timely intervention.
  • Anonymous (school practitioner), to Chitra & Nappinai – How can schools act when cyber‑abuse occurs, given slow legal processes? Chitra suggested pre‑emptive digital‑literacy programmes and a rapid‑takedown protocol with local police; Nappinai added that the 2‑hour removal rule can be invoked if the content is flagged by the school’s IT admin.
  • General audience, to Maya – What global standards should India adopt? Maya recommended child‑rights‑impact‑assessment (CRIA) frameworks and adapting UNICEF, OECD, and Brookings AI‑ethics guidelines to Indian linguistic and cultural contexts.
  • Attendee, to Akash – How to operationalise safeguards across India’s linguistic diversity? Akash called for regional, language‑specific AI‑safety modules and partnerships with local NGOs for community‑driven monitoring.
  • Attendee, to the panel – Is a global coordination mechanism for AI‑driven child harms needed? Panelists agreed on a Global‑South coordination hub, citing the recent UN Convention against Cybercrime as a model for cross‑border enforcement.

5. Closing Remarks & Administrative Announcements

  • Photograph – Panelists and organizers were invited for a group photo.
  • Acknowledgments – Thanks to IGPP, Childlight, iSPIRT, Space2Grow, and the moderator Ashish Jaiman.
  • Next session teaser – The audience was invited to stay for the upcoming session on “AI‑Powered Ports”.
  • Memento distribution – Participants would receive a digital memento summarising the discussion.

Key Takeaways

  • Scale of Harm: In 2024, > 300 million children globally experienced technology‑facilitated abuse; AI‑generated sexual‑abuse material surged by 1,325 % year‑on‑year.
  • Survey Insight: Only 25 % of Indian youth feel “online life is safe”; 48 % hold tech companies primarily responsible for safety.
  • Terminology Shift: Panelists argue that “child well‑being” is a more holistic, rights‑based framing than “child safety”.
  • Design‑by‑Well‑Being: Snap’s product‑level safeguards (bidirectional friend acceptance, location‑off‑by‑default, family‑center dashboard) illustrate concrete privacy‑first, safety‑by‑design practices.
  • Transparency & Edge Computing: AI inference should be on‑device with clear model‑cards and no anthropomorphising to keep children aware of AI’s mechanical nature.
  • Legal‑Tech Blueprint: A techno‑legal framework modelled on India’s UPI system is proposed to embed well‑being into the infrastructure, reducing reliance on voluntary corporate compliance.
  • Rapid‑Response Enforcement: Recent regulatory change now mandates 2‑hour takedown of child‑sexual‑abuse material; however, operational bottlenecks persist in police and cyber‑crime portals.
  • Multi‑Stakeholder Observatory: The recommended Child‑Safety Solutions Observatory, Global‑South Working Group, and Innovation Sandbox aim to collate best practices, foster cross‑border learning, and accelerate safe‑tech prototypes.
  • Global Benchmarks: Adoption of the UN Convention against Cybercrime and of UNICEF, OECD, and Brookings guidelines – including child‑rights impact assessments (CRIA) – is essential for scenario‑based, culturally relevant standards.
  • Education & Literacy: Effective safeguarding requires AI literacy for children, digital‑safety curricula for parents/teachers, and youth advisory councils to co‑design policies.
  • Policy Urgency: India needs a stand‑alone children‑online‑safety statute (age‑appropriate design code, identity‑mark requirement for synthetic media) to complement broader data‑protection legislation.

These points capture the essential insights, data, and actionable recommendations that emerged from the session on safeguarding children within India’s evolving AI ecosystem.