Trusted AI: Practical Pathways for the Public Sector
Abstract
The session explored how public‑sector organisations can move from high‑level AI‑trust principles to concrete, scalable practices. Anusha Dandapani opened with a framework that treats trust as a design‑time requirement, outlining four pillars – governance, technical safeguards, human accountability and continuous oversight. The UNICC “AI Hub” was presented as a shared experimentation platform that emphasises open‑source models, reusable components, and human‑in‑the‑loop safeguards such as the UN‑wide HR bot “Unify HR”. The discussion then shifted to sector‑specific implementation, with Dr Manish Pant describing value‑driven, outcome‑oriented AI in health, and Robyn Scott highlighting pervasive AI‑literacy gaps and the need for hyper‑personalised training. Throughout, panellists stressed organisational readiness, change‑management, and the crucial role of leadership in embedding responsible AI at scale. Audience questions raised the need for sector‑specific policies beyond health, illustrating the challenges of translating AI assurance into practice across India’s diverse public‑sector landscape.
Detailed Summary
1. Trust as a Design‑Time Requirement
Anusha Dandapani set the tone by stating that AI is now a core component of public digital infrastructure – from service delivery to strategic decision‑making. Consequently, trust must be built in from the outset; it cannot be bolted on as an afterthought.
Four inter‑related pillars were identified:
- Governance mechanisms – clear authority over model selection, data set choice, and training processes.
- Technical safeguards – mechanisms to detect and mitigate erroneous outputs or system failures.
- Human accountability – explicit responsibility for AI‑generated results, avoiding “no‑one‑owns‑the‑mistake” scenarios.
- Ongoing oversight – continuous monitoring, bias‑checking, data refreshes, and cost/people planning to sustain systems over the long term.
Dandapani highlighted that many UN agencies are at different maturity stages – from experimentation to production – but all must embed these pillars to achieve “responsible AI”.
2. UNICC AI Hub – Shared Platforms and Open‑Source Strategy
The conversation moved to the AI Hub that UNICC operates:
- Shared experimentation platforms – common sandbox environments where UN entities can trial AI solutions under uniform security and guardrail policies, improving value‑for‑money.
- Guidance & governance services – the Hub produces policy templates, risk‑assessment tools, and best‑practice checklists.
- Open‑source model curation – UNICC deliberately aggregates models from India, Europe, the Middle East and the United States, ensuring a broad palette of options and avoiding vendor lock‑in.
- Human‑in‑the‑loop safeguards – for sensitive applications, a designated person validates AI outputs before they reach end‑users.
A concrete illustration was the “Unify HR” bot, initially built for 13 UN organisations and now serving roughly 15. The bot demonstrates the need to manage multilingual data, secure personal records, and embed feedback loops for continuous improvement.
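The human‑in‑the‑loop safeguard described here can be sketched as a simple routing gate in Python. The `HitlGate` class, its confidence threshold, and the `sensitive` flag are illustrative assumptions for the pattern in general, not details of UNICC's actual Unify HR implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    question: str
    answer: str
    confidence: float  # model's self-reported confidence, 0..1 (assumed signal)
    sensitive: bool    # e.g. touches personal HR records

@dataclass
class HitlGate:
    """Minimal human-in-the-loop gate: sensitive or low-confidence drafts
    are held for a named human reviewer instead of being auto-released."""
    threshold: float = 0.8
    review_queue: list = field(default_factory=list)
    released: list = field(default_factory=list)

    def submit(self, draft: Draft) -> str:
        # Route anything sensitive or uncertain to a human before end-users see it.
        if draft.sensitive or draft.confidence < self.threshold:
            self.review_queue.append(draft)
            return "held-for-review"
        self.released.append(draft)
        return "released"

    def approve(self, draft: Draft) -> None:
        # A human reviewer explicitly takes accountability for the release.
        self.review_queue.remove(draft)
        self.released.append(draft)
```

The point of the sketch is accountability: every output that reaches an end‑user has either cleared an explicit automated bar or carries a named human approval.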
3. Organisational Readiness & AI Literacy
Dandapani emphasised that technology alone does not guarantee adoption; the people side is equally critical.
- AI literacy programmes – UNICC runs capacity‑building workshops to ensure officials understand what AI does (pattern recognition, not reasoning) and can flag anomalous outputs.
- Change‑management capacity – structured processes that guide users from pilot to production, ensuring clear accountability structures.
- Feedback mechanisms – channels for end‑users to report “fit‑for‑purpose” concerns, enabling the Hub to iterate on models and safeguards.
She noted that the recent generative‑AI rush demonstrated the danger of deploying without adequate human readiness.
4. Panel Introduction
After the opening, the moderator introduced the panel:
- Dr Manish Pant – UNDP policy specialist for digital health.
- Robyn Scott – CEO & co‑founder of Apolitical, a partner of UNICC on AI‑literacy training.
(Other panellists – Katharina Frey and Sameer Chauhan – contributed throughout the discussion.)
5. Health‑Sector Lens – Value‑Based, Safe AI
5.1 Defining Value
Manish Pant argued that AI’s value in health must be outcome‑oriented, not technology‑oriented. Key value dimensions cited:
- Service efficiency – e.g., reduced stock‑outs, improved supply‑chain visibility, better treatment adherence.
- Clinical safety – AI should never replace clinician judgment where risk is high; human‑in‑the‑loop design is mandatory.
- Economic return – demonstrable return‑on‑investment from AI pilots (e.g., COVID‑response analytics) helps justify scaling in resource‑constrained settings.
5.2 Governance & Technical Safeguards
- Standard Operating Procedures (SOPs) – embed AI steps into existing health‑system workflows, rather than treating AI as a separate silo.
- Data quality & interoperability – fragmented, siloed health data hampers model performance; investment in digital public infrastructure (registries, standards) is essential.
- Human‑in‑the‑loop & validation – all AI outputs must be reviewed by qualified health professionals before acting on them.
5.3 Skills & Workforce Enablement
- Continuous mentorship – one‑off trainings are insufficient; a sustained mentorship model builds confidence among doctors, nurses and community health workers.
- Augmentation, not replacement – framing AI as a tool that enhances rather than displaces health staff mitigates resistance.
5.4 Policy & Regulatory Foundations
- Health‑specific AI models – co‑creation between technologists and clinicians to produce explainable, domain‑specific algorithms.
- Regulatory oversight – analogous to clinical trials for software‑as‑a‑medical‑device; health ministries need assessment frameworks and capacity‑building for regulators.
5.5 Benchmarking & Risk Stratification
- Risk tiers – low‑risk chatbots (appointment scheduling) can be deployed early; high‑risk triage or diagnostic tools require stricter validation and governance.
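Risk stratification of this kind is often written down as a small policy table mapping each tier to the controls it requires. The tiers and control names below are purely illustrative assumptions, not drawn from any framework cited in the session:

```python
# Illustrative risk-tier policy table (hypothetical tiers and controls).
RISK_POLICY = {
    "low": {   # e.g. an appointment-scheduling chatbot
        "human_review": False,
        "required_validation": "basic QA testing before deployment",
    },
    "high": {  # e.g. clinical triage or diagnostic decision support
        "human_review": True,
        "required_validation": "clinical validation study and regulator sign-off",
    },
}

def controls_for(risk_tier: str) -> dict:
    """Look up the governance controls required for a use case's risk tier."""
    return RISK_POLICY[risk_tier]
```

Encoding the policy as data rather than prose makes it auditable and lets deployment pipelines enforce the tiering mechanically.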
6. AI Literacy & Readiness – Insights from Robyn Scott
6.1 Current Gaps
- Ethical‑framework awareness – only 26 % of AI implementers in government are familiar with their own nation’s AI ethics policies.
- Data‑readiness – while 70 % of leaders claim to have AI pilots, just 36 % have concrete plans to make organisational data AI‑ready.
- Skill clarity – merely 20 % of public servants can articulate the competencies they need for responsible AI use.
6.2 Hyper‑Personalised Training
Scott advocated moving from a handful of generic learner personas to role‑specific, algorithm‑driven learning pathways, recognising that every public‑sector employee has a distinct workflow. The goal is to deliver just‑in‑time, contextualised modules that align with day‑to‑day tasks.
6.3 Cultural Dimensions
- Automation vs. augmentation – public servants are broadly optimistic about AI (≈90 %), yet many fear job loss; messaging must stress AI as a productivity multiplier that expands capacity (e.g., “making every health worker 2× effective”).
- Agency – AI should not monopolise decision‑making; humans must retain agency to intervene, correct, and audit AI actions.
- Leadership modelling – senior officials need to personally engage with AI tools; the anecdote of India’s Prime Minister asking which AI courses he should take underscores the importance of top‑down commitment.
7. Cross‑Sector Concerns & Audience Question
An audience member from India’s public sector raised two points:
- Policy breadth – the current discussion focused on health, but AI governance is required across power, steel, fertilizer, oil & gas, infrastructure, and defence. A single, monolithic AI policy is insufficient; sector‑specific frameworks are needed.
- Ground‑level implementation – citing a recent sterilisation incident in Madhya Pradesh where no AI or staff were present, the participant questioned how AI can reach the frontline in such critical moments.
The moderator acknowledged the concerns, promised to bring them to future sessions (e.g., at Bharat Mandapam), and noted that the panel’s insights on literacy, governance, and organisational readiness are directly applicable to these broader domains.
8. Closing Remarks
Anusha Dandapani thanked the panellists and audience, underscoring that responsible AI is a collective, multi‑stakeholder endeavour that hinges on trust, governance, human accountability, and continuous oversight. She reiterated the importance of scaling trustworthy AI from pilot to production to global impact, especially in the Global South, where India’s digital public infrastructure serves as a model for inclusive, people‑centric AI.
Key Takeaways
- Trust must be embedded at design‑time; it cannot be retrofitted after deployment.
- Four core pillars underpin responsible AI in the public sector: governance, technical safeguards, human accountability, and ongoing oversight.
- The UNICC AI Hub provides shared, open‑source‑first experimentation environments, promoting reuse and cost‑efficiency across the UN system.
- Human‑in‑the‑loop mechanisms and clear accountability structures are non‑negotiable for sensitive applications (e.g., HR data, health diagnostics).
- Organisational readiness—including AI literacy, change‑management, and feedback loops—is as important as the technology itself.
- In health, AI value should be measured by service outcomes, safety, and economic return, not by the sophistication of the model.
- Data interoperability and quality are prerequisites; fragmented health data severely limits AI effectiveness.
- Sector‑specific AI policies are essential; a single universal policy cannot address the diverse operational realities of power, infrastructure, defence, etc.
- AI‑literacy gaps are stark: only about a quarter of implementers know their government’s ethical frameworks, and less than one‑third have data‑readiness plans.
- Hyper‑personalised, role‑based training is the next frontier for capacity‑building in the public sector.
- Leadership engagement (e.g., senior officials taking AI courses) dramatically accelerates cultural acceptance and responsible adoption.
These points distill the session’s practical roadmap for moving from AI principles to trusted, scalable implementation across the public sector.
See Also:
- scaling-trusted-ai-for-8-billion
- ai-impact-forum-democratising-ai-resources
- ai-innovators-exchange-accelerating-innovation-through-startup-and-industry-synergy
- responsible-ai-at-scale-governance-integrity-and-cyber-readiness-for-a-changing-world
- ai-for-indias-next-billion-intergenerational-insights-for-inclusive-and-future-ready-growth
- ai-and-children-turning-principles-into-practice-for-safe-inclusive-and-empowering-ai