Reducing AI-Driven Digital Harms for Minors: Platform Accountability, Age Assurance, and Safety-by-Design

Abstract

The panel examined how generative AI is magnifying existing digital harms to children—misinformation, harassment, privacy loss, addictive design, and mental‑health risks—and explored emerging global policy approaches. Panelists reviewed recent regulatory shifts (e.g., duty‑of‑care mandates, age‑verification schemes, AI‑by‑design standards), shared insights from research with youth, and debated the balance between innovation, education, and safety. The conversation also touched on practical steps for schools, governments, platforms, and families, and concluded with audience questions about concrete actions and future governance.

Detailed Summary

1.1 Introduction by the Moderator (Kelly Forbes)

  • Kelly framed the session as addressing the rapidly evolving intersection of AI and online safety, noting the AI Asia Pacific Institute’s prior work with NetSafe on misinformation and emerging AI‑driven harms.
  • She highlighted the surge of government AI‑in‑education initiatives and the need to understand how regulation affects both schooling and broader child‑online experiences.

1.2 First Speaker – Libby Giles (NetSafe, New Zealand)

  • Historical perspective: NetSafe began two decades ago with “Hector’s World,” an on‑screen dolphin that warned children about risky content.
  • Current challenge: AI‑generated misinformation and deepfakes have accelerated the scale of harm.
  • Approach: Emphasises practical, multi‑stakeholder solutions rather than one‑size‑fits‑all; acknowledges ongoing debates over voluntary versus statutory child‑safety commitments.

2. Global Regulatory Landscape

2.1 Abhineet Kaul (Access Partnership)

  • Shift from voluntary to statutory duty‑of‑care: Countries are moving from soft pledges to binding obligations for platforms.
  • Converging pillars: Rights‑based foundations, safety‑by‑design, age‑assurance, and content classification.
  • Two regulatory extremes:
    • Age‑based restrictions (e.g., bans on under‑16 access).
    • System‑wide controls (e.g., content‑type classification, “minor‑mode” filters).
  • Key point: Regulation must be a shared responsibility: platforms, governments, parents, and educators all play a part.

2.2 Niki Natrajan (Youth & Media Researcher)

  • Youth‑centric framing: Emphasised that young people’s voices must shape policy.
  • Developmental lens: Children’s cognitive “brakes” are still forming; AI‑driven recommendation engines can overwhelm them.
  • Terminology: Introduced the concept of an “ambient AI media environment” – the invisible AI layer that powers feeds, ads, and personalization.
  • Risk observation: Children are asked to make complex trust decisions (e.g., distinguishing deepfakes) without adequate critical‑thinking tools.

2.3 Aleksandra Chmielewska (UNICEF/GIGA)

  • Digital divide reminder: Over 30 % of the global population still lacks internet access; any AI‑safety agenda must first address connectivity inequities.
  • Youth participation: UNICEF’s recent consultation of ≈50,000 children in 180+ countries produced a youth‑authored AI‑safety statement.
  • Evidence of harm: Cited leaked research from Meta showing 1 in 3 girls on Instagram experience negative body‑image effects; AI now amplifies such harms.

2.4 Libby Giles (continued) – Spain’s Proposal

  • Spain, a leading voice within the EU on this issue, proposes:
    • Age threshold of 16 for social‑media access (mirroring Australia).
    • Ban on infinite‑scrolling and other “addictive” UI features.
  • Regulatory nuance: Calls for enforcement beyond merely passing laws; references the EU AI Act (adopted in 2024) as a nascent but promising framework.

2.5 Niki Natrajan – New Zealand Perspective

  • NZ is monitoring Australian and European reforms.
  • Proposed bill: Ban social media for under‑16s; NetSafe opposes outright bans, arguing they “push harms underground” rather than mitigate them.
  • Shared accountability: Stressed that trust‑building across sectors is essential; regulation alone is insufficient.

3. AI in Education – Opportunities & Risks

3.1 Kelly Forbes – Youth‑Centric Findings

  • Survey of New Jersey students:
    • Red line: Image‑generation AI (e.g., DALL‑E, Midjourney) is widely rejected by teens for creative work.
    • Use cases: Some students (e.g., “Frank”, 16) employ AI for routine calculations to save time.
  • Key insight: Children want transparency about platform incentives and data monetisation.

3.2 Aleksandra Chmielewska – Institutional Response

  • UNICEF and UNESCO are training educators on AI literacy, bias, and hallucination detection.
  • Policy note: While children should be consulted, regulators and platforms retain ultimate accountability for safe design.

3.3 Libby Giles – Guardrails for Schools

  • Four pillars for AI‑enabled classrooms:
    1. Strict data boundaries – define permissible uses.
    2. Impact assessments before deployment.
    3. Governance & procurement checks to avoid vendor lock‑in.
    4. Prohibit surveillance‑type AI – AI must not be used to continuously profile or track students.

3.4 Abhineet Kaul – Lifelong‑Learning & Digital Citizenship

  • Promotes a “digital global citizenship” framework: every person (child, parent, senior) identifies as a citizen of the digital ecosystem with corresponding responsibilities.
  • Highlights the need for continuous education beyond primary school, matching the rapid AI‑skill turnover in the labour market.

4. Broader Systemic Issues

4.1 Regulation vs. Platform Influence

  • Panel consensus: Regulations must have “teeth” (e.g., substantive fines). Meta’s record GDPR fine (€1.2 bn) illustrates the market impact of enforcement.
  • Big‑tech lobbying: Large platforms can obfuscate public understanding and dilute policy effectiveness.

4.2 Socio‑Technical Gaps

  • Voice‑enabled AI for offline populations (India’s AI Mission): Targeting the 500 million “off‑grid” users with voice interfaces to bridge access gaps.
  • Mental‑health service strain: Youth are already turning to AI chatbots (e.g., ChatGPT) for mental‑health advice, underscoring the need for robust social‑service integration.

5. Audience Q&A – Concrete Recommendations

Each question below is summarised, followed by the respondent(s) and their key recommendation.

  • Local liaison for districts (India) – Panel (general): Deploy voice‑enabled community liaisons to educate households on AI risks; aligns with India’s AI‑mission “voice‑first” outreach.
  • Effectiveness of regulation (Australia–EU comparison) – Audience comment (Australia) & Libby: Public consultation is essential; regulations must be clear, enforceable, and complemented by community education.
  • Academic integrity & AI use in schools – Aleksandra & Kelly: Integrate AI‑aware curricula that teach critical evaluation of AI‑generated content; encourage tools that prompt dialogue rather than give direct answers.
  • Design‑vs‑ban debate – Niki & Libby: Favour safety‑by‑design (e.g., UI throttles, age verification without invasive digital IDs) over blanket bans; balance privacy with accountability.
  • Platform responsibility vs. parental role – Abhineet & Kelly: Platforms should bear the major share of responsibility for recommendation algorithms; parents and educators provide resilience‑building literacy.

6. Closing Remarks

  • Libby Giles: Emphasised that education of all stakeholders—parents, teachers, policymakers—is the most powerful tool against AI‑driven harms.
  • Kelly Forbes: Stressed the need to shift from “technology‑vs‑society” framing to a co‑design approach where children are active participants.
  • Abhineet Kaul: Highlighted the importance of testing regulatory models (Australia as a “testbed”) and ensuring business models do not profit from child exploitation.
  • Aleksandra Chmielewska: Reiterated that global collaboration (UNICEF, UNESCO, GIGA) is key to scaling safe AI education.
  • Niki Natrajan: Concluded with a call for ongoing dialogue that places developmental science at the centre of policy design.

Key Takeaways

  • Regulatory momentum: Countries are moving from voluntary child‑safety pledges to binding duty‑of‑care obligations, with converging pillars such as safety‑by‑design and age‑assurance.
  • Age‑based restrictions are controversial: Bans on under‑16 social‑media access (legislated in Australia, proposed in New Zealand and Spain) risk pushing harmful behavior underground and may outstrip enforcement capacity.
  • Ambient AI environment: Children interact with a behind‑the‑scenes AI layer that personalises feeds and can overwhelm developing decision‑making processes.
  • Youth participation is essential: UNICEF‑led consultations of ≈50,000 children demonstrate that child‑authored policy statements improve relevance and legitimacy.
  • Education & digital citizenship: Effective mitigation requires curricula that teach AI literacy, critical thinking, and data‑rights awareness for all age groups.
  • Design over bans: Safety‑by‑design (e.g., disabling infinite scrolling, transparent recommendation logic) is preferred to outright bans, which may be technically infeasible and socially disruptive.
  • Data‑use limits in schools: Schools should enforce strict data‑boundary policies, conduct pre‑deployment impact assessments, and prohibit surveillance‑type AI.
  • Enforcement matters: Heavy fines (e.g., Meta’s GDPR penalty) illustrate that robust, enforceable regulations are necessary to deter profit‑driven harms.
  • Digital divide cannot be ignored: Over 30 % of the world still lacks internet; any AI‑safety framework must first ensure equitable access before imposing advanced safeguards.
  • Community‑level outreach: Proposals such as district‑level AI liaison officers (particularly in low‑resource settings like India) can bridge gaps between technology, policy, and households.
