AI and Children’s Safety and Wellbeing
Abstract
The session examined how artificial intelligence can be shaped to protect and promote children’s safety, mental health, and development rather than expose them to new harms. After a dynamic opening from a teenage AI entrepreneur, the panel explored four main themes: (1) what AI‑enabled literacy should look like in schools; (2) how AI can expand agency while avoiding over‑dependence; (3) concrete safety‑by‑design mechanisms (age‑verification, parental controls, content safeguards); and (4) the need for inclusive, culturally aware governance frameworks. The discussion combined practical examples, research findings, and policy recommendations, ending with a heartfelt call to keep children at the centre of AI design.
Detailed Summary
1. Opening Address
Raul John Aju (Rahul) opened the session with an energetic, informal talk that blended personal anecdotes with a demonstration of his own AI projects.
- Curiosity & Critical Thinking – Rahul recalled his father’s mantra “question everything,” explaining that his early habit of probing why chairs have four legs or why birds fly helped him develop a sceptical mindset. He emphasised that today’s children must be equipped to distinguish factual from fabricated information, a skill that is increasingly hard for adults, let alone children, in the age of generative AI.
- AI Safety Tool – “Rescue AI” – Rahul introduced a prototype he built that can ingest a contract or Terms‑and‑Conditions (T&C) document, flag high‑risk or legally problematic clauses, and advise users whether to proceed. The tool was originally designed for legal professionals, but he has adapted it for broader consumer awareness.
- Education Gap – He argued that schools teach children how to use machines (e.g., calculators) before teaching them how to think. He advocated a “natural‑intelligence‑first” approach: children should master fundamental concepts before leaning on AI to write essays, sing, or solve equations.
- Personal Impact – Rahul cited his own initiative, Thinkcraft Academy, a free course that taught 700,000 learners how to build and fine‑tune large language models (LLMs) within 30 days. He used the success of this program to illustrate the massive appetite for AI literacy.
- Call to Action – In closing, Rahul asked the audience to keep the focus on humanity: “AI will not take your job, but someone using AI will.” He urged peers and adults to remain curious, collaborative, and ethically grounded.
2. Transition to the Panel
Co‑moderator Urvashi Aneja (Digital Futures Lab) thanked Rahul for his “thoughtful and energising address” and introduced her fellow co‑moderator, Thomas Davin (UNICEF). She then invited the panelists:
- Baroness Joanna Shields (RAIIF)
- Maria Bielikova (Kempelen Institute)
- Chris Lehane (OpenAI)
- Tom Hall (LEGO Education)
3. Theme 1 – AI Literacy and Classroom Innovation
3.1 Tom Hall (LEGO Education)
- Current Landscape – Teachers are excited about AI (≈80 % positive), but only 41 % feel prepared to teach it.
- Definition of AI Literacy – Giving children a “screwdriver” to open the “black box” of AI: understanding data, sensing, predictability, bias, and the limits of models.
- Pedagogical Priorities –
  - Blend real‑world, problem‑based learning with AI tools.
  - Emphasise creativity over replacement (AI should augment, not supplant, human imagination).
  - Ensure inclusive curricula that reflect diverse learners, especially girls and under‑represented groups.
- Policy & Capacity Building – LEGO Education collaborates with policymakers to embed AI literacy in curricula, but stresses the need for teacher‑centric toolkits and ongoing professional development.
3.2 Chris Lehane (OpenAI)
- Individualised Tutoring – OpenAI envisions an AI tutor that adapts pace, modality, and difficulty to each child’s learning style, unlocking potential for children who learn differently.
- Agency & the Industrial‑Age Legacy – Noted that U.S. K‑12 systems were built for an industrial‑age labour model. AI can recalibrate this by empowering agency, allowing children to become creators rather than passive recipients.
- Safety‑by‑Design – OpenAI’s age‑assurance system defaults to an under‑18 model when age is uncertain, blocking violent, sexual, or self‑harm content. Additional measures include robust parental controls, real‑time alerts, a prohibition on targeted advertising to minors, and external audits (e.g., by state attorneys general).
- Global Adaptation – Acknowledged that regional privacy regimes (e.g., Europe’s GDPR) constrain certain age‑verification techniques; OpenAI works with local regulators to balance cultural norms and technical feasibility.
3.3 Joanna Shields (RAIIF)
- Post‑Harm Paradigm Insufficient – Traditional regulation (reactive after damage) doesn’t work for AI because the technology is proactive and pervasive.
- Safety‑by‑Design via Consultation – Highlighted that AI developers are increasingly receptive to embedding safety from the start, citing OpenAI’s recent age‑gate rollout.
- Privacy‑Preserving Age Assurance – Emphasised the need for cryptographic or biometric methods that respect privacy while ensuring age‑appropriate experiences.
3.4 Maria Bielikova (Kempelen Institute)
- Profiling Risks – Demonstrated that AI systems can profile minors on platforms (e.g., TikTok) using subtle behavioural cues, leading to targeted content even when “formal” ads are absent.
- Research Methodology – Conducted a bot‑based study with synthetic users (16‑year‑olds and adults) across topics (beauty, fitness, gaming) on TikTok in Germany; findings revealed significant differences in content exposure based on age.
- Need for Behavioural Studies – Argued that transparent, impact‑focused research must precede deployment, informing inquiry mechanisms and redress frameworks.
4. Theme 2 – Risks, Harms, and Accountability
4.1 UNICEF Perspective (Thomas Davin)
- Learning Gap – Cited UNICEF data: 7 out of 10 ten‑year‑olds cannot read and understand a simple text, indicating massive potential for AI‑driven learning gains.
- Over‑Dependency Threat – Warned that excessive reliance on AI could stifle curiosity, creativity, and critical thinking, potentially causing a “one‑size‑fits‑all” pedagogical approach.
4.2 Cross‑Panel Discussion on Governance
- Operationalising Principles – Panelists agreed on the need to translate high‑level values (safety, transparency, accountability) into measurable, enforceable standards across jurisdictions.
- Age‑Verification as a Core Pillar – OpenAI’s Open Age Alliance proposes a universal age‑key that travels with the child across services, enabling consistent age‑appropriate experiences.
- Cultural Diversity vs. Monoculture – Both Joanna Shields and Maria Bielikova stressed that importing only Global‑North models risks erasing local languages, dialects, and cultural contexts. Inclusive design must safeguard cultural plurality while maintaining safety.
- Redress Mechanisms – Discussed the lack of clear remediation pathways when AI‑mediated harms occur (e.g., misinformation, manipulative content). Suggested a tri‑level approach: (1) real‑time alerts to parents/guardians, (2) platform‑level content moderation, (3) external oversight bodies.
5. Theme 3 – Practical Recommendations
| Recommendation | Proponent | Key Points |
|---|---|---|
| Age‑Appropriate Design by Default | Joanna Shields, OpenAI | Embed privacy‑preserving age gates; treat age‑verification as a non‑negotiable baseline. |
| Teacher‑Centric Toolkits | Tom Hall | Provide ready‑made lesson plans, real‑world case studies, and professional development for AI integration. |
| Child‑Led Governance | Rahul, UNICEF | Involve children directly in policy‑making, testing, and feedback loops. |
| Cross‑Cultural Model Diversity | Maria Bielikova, Joanna Shields | Support development of regional AI models reflecting local languages and cultural values. |
| Robust Redress & Audit | OpenAI, UNICEF | External reviews (e.g., attorney‑general audits), transparent reporting, and clear escalation pathways for harms. |
| Offline/Low‑Connectivity Solutions | UNICEF (Thomas Davin) | Design AI tools that function offline to reach unconnected or rural learners. |
| Balanced Curiosity‑Stimulating AI | Rahul, Thomas Davin | Design AI that occasionally challenges the child, avoiding instant correct answers that could erode grit. |
6. Closing Reflections
- Rahul’s Final Remarks – Re‑emphasised that children must be central in AI design, not an afterthought. He thanked the panel for listening to youth voices.
- Moderator Summary (Urvashi Aneja) – Highlighted three actionable directions: (1) safety‑by‑design, (2) inclusion‑by‑default, and (3) child‑centric governance.
- Acknowledgement of UN Envoy – Noted that Amandeep Singh Gill could not attend due to logistical issues, underscoring the real‑world challenges of coordinating global stakeholders.
The session concluded with applause.
Key Takeaways
- AI literacy must be hands‑on: teaching children how AI works (data, bias, model limits) is more important than merely showing them what it can do.
- Safety‑by‑design is non‑negotiable: age‑verification, robust parental controls, and prohibition of targeted ads to minors should be built into AI products from the outset.
- Current teacher readiness is low: although most educators are enthusiastic about AI, only 41 % feel prepared to teach it, signalling an urgent need for professional‑development resources.
- Cultural diversity matters: relying on a single set of global models risks creating a monoculture that erases local languages and values; regional model development is essential.
- Child agency vs. over‑dependence: AI can empower agency, but unchecked dependence may diminish curiosity and grit; design should intentionally include productive struggle.
- Real‑world evaluation required: lab‑only testing is insufficient; continuous, context‑specific impact assessments must accompany deployment.
- Redress mechanisms are critical: clear, enforceable pathways for reporting and correcting AI‑mediated harms must be established and audited.
- Inclusive access: solutions need to work offline and be accessible to children in low‑resource settings to avoid widening the digital divide.
- Children as co‑creators: involving minors directly in governance, testing, and feedback loops yields more relevant, humane AI systems.
- Global standards with local flexibility: a universal age‑key (e.g., via the Open Age Alliance) can provide consistency, while allowing jurisdictions to tailor cultural safeguards.
These points capture the consensus and tensions expressed during the panel, offering a roadmap for policymakers, industry leaders, educators, and civil‑society actors to shape AI that genuinely safeguards and nurtures the next generation.
See Also:
- safeguarding-children-in-indias-ai-future-towards-child-centric-ai-policy-and-governance
- applying-ai-how-ai-innovators-and-governments-reshape-the-public-sector
- peace-power-and-perspectives-a-value-based-approach-to-trustworthy-ai
- safe-trusted-ai-at-scale
- safe-and-trusted-agentic-ai-building-accountability-and-inclusion-for-india-and-the-global-south
- ai-for-democracy-reimagining-governance-in-the-age-of-intelligence
- trusted-ai-at-scale-a-global-south-leadership-dialogue
- enterprise-adoption-of-responsible-ai-challenges-frameworks-and-solutions
- shaping-the-ai-narrative-trust-outcomes-and-responsibility
- international-ai-safety-coordination-what-policymakers-need-to-know