AI and Children: Turning Principles into Practice for Safe, Inclusive, and Empowering AI

Abstract

The flagship session gathered senior policymakers, industry leaders, a UNICEF team, the national child‑rights commission, and a teenage youth advocate to move from abstract child‑centred AI principles to concrete actions. Opening remarks framed the urgency: India’s 250 million school‑going children are already exposed to AI‑driven tools, making safe, inclusive, and empowering AI a governance imperative. A 13‑year‑old youth advocate presented a globally sourced statement with ten specific demands. The panel then explored practical levers (national regulatory frameworks, industry design‑by‑default safeguards, data governance, and inclusive AI literacy) while highlighting concrete pilots, emerging risks (e.g., AI‑driven emotional dependency), and the need for a whole‑society approach. The session closed with a collective call for a post‑summit declaration committing leaders to protect and empower children in the AI era.

Detailed Summary

1. Opening Remarks (0‑10 min)

  • Director General (Ms. Jyoti Vij, FICCI) thanked the Secretary of Electronics & IT, the Government of India, and UNICEF for enabling the discussion.

  • Prof. Ajay Kumar Sood (Principal Scientific Advisor, GoI) was invited to give welcome remarks. He highlighted:

    • The session’s focus on translating child‑centred AI principles into practice.
    • The dual mission of AI for children: safety and empowerment.
    • Alignment with the National Education Policy 2020, which embeds AI and computational thinking from Grade 3 onward.
    • The scale of the challenge: >250 million Indian school‑going children, with ~1/3 of internet users under 18.
    • The necessity of moving beyond “guiding documents” to embedded product design, procurement, regulatory standards, data‑governance, and capacity‑building.
  • Prof. Ajay Kumar Sood reiterated that the AI agenda is led by the Ministry of Electronics & IT and the Ministry of Education, stressing the need for concrete, actionable pathways.

2. Youth Advocate Address – “Children & Youth Statement” (10‑15 min)

  • Prasiddhi Singh (aged 13, UNICEF India Youth Advocate) delivered a powerful statement, summarised below:

    1. Inclusive Design – AI must be trained on diverse, representative data covering caste, class, gender, geography, and race.
    2. Data Protection & Safety – Age‑appropriate safeguards against harmful AI‑generated content.
    3. Education & Capacity Building – Integrate AI literacy in schools so children can engage critically and ethically.
    4. Parental Guidance – Tools for parents to protect children while respecting privacy.
    5. Opportunities for Youth – Incubation, mentorship, and project‑based AI initiatives.
    6. Skill Development for Employment – Build AI‑ready skills for future jobs.
    7. Mitigate Job‑Displacement Risks – Ensure automated screening does not unfairly exclude marginalized youth.
    8. Transparency & Human Oversight – Particularly in AI‑driven recruitment.
    9. Youth as AI Ambassadors – Direct involvement of children in the AI lifecycle.
    10. Call to Action – “Are you ready to build a smart, fair future with us?”
  • The statement was framed as a mandate rather than a mere document, built on inputs from ~54,000 young people across 184 countries and focus group discussions in India.

3. Government Briefing – Ajay Kumar Sood (15‑25 min)

  • Presented latest data on AI usage among children:

    • UK: 67 % of teens use AI.
    • US: ~40 % of elementary‑age children use AI‑powered educational tools.
    • India: Data still emerging, but 85.5 % of households own a smartphone and 86 % have internet access.
  • Highlighted risks: unknown long‑term effects of AI companions, over‑reliance, erosion of critical thinking, and permanent digital footprints.

  • Described India’s governance actions:

    • AI Mission – building capability while strengthening safeguards.
    • Techno‑legal framework (Nov 2025) – a “white paper” on AI safety, building on the earlier DPI 1.0 experience.
    • Recent regulation requiring synthetic‑content labeling and prohibiting harmful content targeting children.
  • Emphasised empowerment:

    • AI must not only be a user‑tool but also a creator‑platform for children.
    • Integration of AI literacy into school curricula (recent Ministry of Education programme).
    • The “Padai with AI” pilot in Tonk, Rajasthan (math scores rose to 96 % within six weeks).
  • Concluded with a call for multi‑stakeholder collaboration to embed child‑specific safeguards into the national AI governance ecosystem.

4. Panel Introduction (25‑30 min)

  • Henrietta Ridley (UNICEF) acted as moderator. She noted the tight panel window (23 minutes) and set up the central question: “How do we design and govern AI so that it truly empowers children while safeguarding their rights and wellbeing?”

  • Panelists announced:

    • May Ellen Stenner (Norway) – diplomatic perspective.
    • Thomas Davin (UNICEF) – global policy view.
    • Sanjeev Sharma (NCPCR) – child‑rights enforcement.
    • Gokul Subramaniam (Intel) – technology implementation.
    • Hector de Rivoire (Microsoft) – corporate responsible‑AI practice.
    • Ajay Vij (Accenture) – industry skilling outlook (later referred to as “Mr Vij”).

5. Panel Contributions

5.1 Norway’s Vision – May Ellen Stenner (30‑35 min)

  • Principles: AI must support (not undermine) children’s safety, participation, learning, and wellbeing, aligned with the UN Convention on the Rights of the Child.
  • National Strategy (5‑year focus):
    • Raise digital competence for parents and children.
    • Age‑appropriate regulation: raising the minimum social‑media age from 13 to 15 and tightening the GDPR age of consent.
    • Screen‑time guidance and proposals to ban mobile phones in schools.
    • Strengthen penalties for marketing exploiting children.
  • Emphasised the need to remove algorithms and screens from the core of childhood.

5.2 Global Pillars – Thomas Davin (35‑40 min)

Outlined three core principles for child‑centred AI:

  1. Safety by Design – age‑appropriate safeguards, bias mitigation, data protection, built‑in accountability and redress mechanisms.
  2. Inclusion by Default – AI must serve every child, not just the digitally privileged; local language, disability support, offline capabilities.
  3. Empowerment – children as creators and governors of AI, not merely passive users; requires AI literacy for both children and parents.

Stressed an all‑of‑society approach requiring coordinated governance, industry standards, and civil‑society participation.

5.3 Child‑Rights Enforcement – Sanjeev Sharma (40‑45 min)

  • Shared two contrasting stories: an AI‑chatbot‑induced suicide vs. an AI‑enabled Anganwadi weight‑measurement app that improved nutrition tracking.

  • Statutory powers of NCPCR: ability to demand takedowns, initiate prosecutions, and enforce protective measures.

  • Key interventions:

    • Development of an AI‑based detection tool to identify platforms circulating child sexual abuse material (CSAM; 64,000 cases in 2024).
    • AI‑assisted mental‑health triage: pilot shows 89 % accuracy in detecting psychological distress, addressing volunteer shortage on the NCPCR’s tele‑therapy line.
    • Collaboration with the Ministry of Education on responsible AI tools for class‑3 curriculum.
  • Highlighted the necessity of early‑stage safeguards and data‑literacy to prevent digital‑footprint harms.

5.4 Technology Priorities – Gokul Subramaniam (45‑50 min)

  • Emphasised responsible AI as foundational, not an add‑on.

  • Three practical priorities:

    1. Protect data, models, and users – especially the child user; includes privacy‑preserving computing.
    2. Affordable compute for co‑creation – enable children to build AI, not just consume it.
    3. Human‑in‑the‑loop scaffolding – e.g., a parent acting as co‑pilot, questioning AI outputs (“Why did you show a male doctor?”).
  • Warned against AI as “digital babysitter” – the technology must stay a tool, not a replacement for human judgment.

5.5 Corporate Responsible‑AI – Hector de Rivoire (50‑55 min)

  • Microsoft’s Responsible AI Principles (since 2018): privacy, fairness, reliability, safety, security, transparency, inclusion.
  • Risk‑evaluation science: focusing on emotional dependency, over‑reliance, anthropomorphism, and “sycophantic” behavior of chatbots.
  • Launched a Microsoft Youth Council to involve children directly in product safety design – an invitation was extended to all attendees.

5.6 Skills & Skilling for the Global South – Ajay Vij (55‑58 min)

  • Framed AI as a democratizer of knowledge: anyone can learn, without a formal degree, provided they have access.

  • Mantra: “Learn to learn.” Embedding meta‑learning into curricula.

  • Highlighted two critical levers:

    1. Access – broadband, devices, low‑cost platforms.
    2. Inclusion – ensuring marginalised communities can access digital resources.
  • Stressed that AI can reshape the future workforce if children are engaged early.

5.7 Inclusion of Marginalised Youth – Open‑Floor (58‑63 min)

  • A speaker from the floor (identified in the transcript as “Mr Pratap”) raised the question of rural youth, women, and first‑generation learners.

  • Key points raised:

    • Need for democratic data creation – incorporate oral languages, community narratives, and lived experiences of marginalised groups.
    • AI‑augmented public‑service roles (e.g., ASHA workers, rural nurses) to improve service delivery.
    • Avoiding a single narrative of “what AI should be”; instead, co‑design with grassroots voices.

6. Closing & Call to Action (63‑70 min)

  • Cynthia McAfee (UNICEF India) summarized the session, thanked all participants, and issued a call for a joint declaration during the summit’s concluding days (19‑20 Oct). The declaration should:

    • Commit leaders to protect and empower children in AI development and deployment.
    • Establish accountability mechanisms across government, industry, and civil society.
  • The moderator Henrietta Ridley thanked the panel for staying within the time limit and praised the energy in the hall.

  • The session ended with a request for a group photograph of all speakers.

Key Takeaways

  • Scale of Exposure – Over 85 % of Indian households have internet‑enabled devices, putting hundreds of millions of children in the reach of AI‑driven services.

  • Data‑Driven Risks – Lack of long‑term research on AI’s impact on child development, potential for emotional dependency, and permanent digital footprints demand immediate safeguards.

  • Policy Momentum – India’s AI Mission, techno‑legal framework, and recent synthetic‑content labeling regulation show a rapid policy response, but implementation details remain critical.

  • Youth‑Led Demands – The 10‑point statement from Prasiddhi Singh demands inclusive data, strong privacy, AI literacy, parental tools, youth‑centric opportunities, and transparency – a concrete blueprint for stakeholders.

  • Three Pillars for Safe AI (UNICEF): Safety by design, Inclusion by default, Empowerment through co‑creation.

  • Concrete Pilots – “Padai with AI” (Rajasthan math boost), AI‑enabled Anganwadi weight tracking, and NCPCR’s CSAM‑detection tool illustrate early successes and the need for scaling.

  • Industry Commitments – Intel stresses data‑model‑user protection and affordable compute; Microsoft embeds responsible‑AI principles and launches a Youth Council; Accenture highlights democratization of skills.

  • Governance & Accountability – Norway’s strategy (higher social‑media age, stricter GDPR consent, school phone bans) offers actionable regulatory ideas for other nations.

  • Inclusion Imperative – Rural, women, and first‑generation learners risk being left behind unless AI datasets incorporate oral languages, community narratives, and offline capabilities.

  • Call for Collective Declaration – All participants agreed on the necessity of a post‑summit declaration binding leaders to protect children’s rights while enabling AI‑driven empowerment, with built‑in accountability mechanisms.
