Peace, Power and Perspectives: A Value-Based Approach to Trustworthy AI
Abstract
The session examined how sovereign, value‑based AI can bolster national security, public‑service delivery and industry resilience while preserving democratic trust. After a geopolitical framing of AI as both a technical and foreign‑policy instrument, Norway showcased its six‑pillar public‑sector AI framework and industry use cases in maritime automation. A research‑driven perspective highlighted the environmental and governance dimensions of trustworthy AI. Norway’s minister outlined the national AI strategy (AI Norway, funding of research centres, EU AI Act transposition) and stressed the need for international cooperation. A diverse panel from India, Brazil, Japan and Norway then offered concise definitions of trustworthy AI and discussed child‑online protection, digital sovereignty, and the alignment of AI policy with inclusive, sustainable growth.
Detailed Summary
1. Geopolitical Framing – Niels Nagelhus Schia
The session opened with a concise address, delivered by Niels Nagelhus Schia, that set the tone for the entire discussion.
Key Insight: Trustworthy AI cannot be a purely national endeavour; it is embedded in the geopolitics of technology, supply‑chain control, and democratic stability.
Three Dynamics Identified:
- Technology as structural power – actors that own foundational models, cloud infrastructure or semiconductor fabs shape global rules and, consequently, trust.
- Concentrated AI supply chains – a handful of chip makers, model trainers and mineral‑source countries create strategic dependencies that become trust vulnerabilities.
- AI’s impact on democracy – algorithmic amplification, misinformation, and election influence operate across borders, making isolated regulation ineffective.
Three Layers of Trust Proposed:
- Technical trust (safety, reliability, explainability)
- Institutional trust (legitimate, accountable governance)
- Geopolitical trust (state cooperation, avoidance of weaponisation)
Call to Action: International standards, transparency mechanisms, dual‑use agreements, and inclusive participation of developing nations are essential to prevent a fragmented, “zero‑sum” AI race.
2. Norway’s Public‑Sector AI Framework – Heather Broomfield
Heather Broomfield (Director‑General, NDDA) presented a six‑element, non‑prescriptive framework that guides trustworthy AI adoption in Norway’s public administration.
| Element | Core Message |
|---|---|
| 1. Legitimate & Trusted Institutions | Norway’s high institutional trust is a non‑negotiable foundation; loss of trust would be catastrophic. |
| 2. Shared Public Values | AI must embody equality, fairness and accountability; bias and power asymmetries are openly acknowledged. |
| 3. Robust Governance | A blend of longstanding national legislation, forthcoming EU AI Act transposition, and soft‑law tools (standards, guidelines). Child‑online safety was highlighted as a top governance priority. |
| 4. High‑Quality Public Data | Norway’s longstanding data‑collection tradition underpins welfare services; data is treated as a societal asset and is made available both openly and via controlled access. |
| 5. Informed, Digitally Mature Population | Citizens expect transparency, explainability and fairness; digital literacy is high, mirroring India’s maturity. |
| 6. Targeted Research | Multidisciplinary research (e.g., the Trust Research Centre) is essential to address AI’s unpredictable risks. |
Illustrative Example: The creation of a government AI unit within the NDDA to accelerate responsible AI development.
Takeaway: Norway is building a systemic, inter‑dependent architecture; failure of any element weakens the whole.
3. Industry Perspective – Hildegun McLernan (Kongsberg Maritime)
Hildegun McLernan (SVP, Technology Office, Kongsberg Maritime) shifted focus to AI in the maritime sector, emphasizing parallels with public‑sector challenges.
Scale of Impact: Kongsberg’s equipment operates on 30,000 vessels, representing ≈ ⅓ of the global maritime fleet—a massive source of real‑time data.
Key Industry Drivers:
- Growth of waterborne trade (80‑90 % of global freight) and its importance for sustainability.
- Labor shortage and the need for autonomous/remote‑support vessels.
- Decarbonisation goals (net‑zero by 2055) and the role of digitalisation as an enabler.
- Cyber‑security & regulatory fragmentation across jurisdictions.
Future Vision: All vessels will be permanently connected, enabling over‑the‑air upgrades, predictive maintenance and fleet‑level optimisation.
Concrete AI Use‑Case – Hydrodynamics:
- Sensors on propellers generate real‑time performance data.
- AI models detect patterns to reduce underwater radiated noise (protecting marine biodiversity), optimise acceleration, and predict wear‑and‑tear to avoid failures.
Human‑Centred Trust: Emphasis on bridging the gap between AI outputs and human expectations; human operators must understand and trust the system.
Research Collaboration: Kongsberg partners with the Norwegian Centre for Trustworthy AI (led by Prof. Morten Dæhlen) to ensure robust, secure solutions.
4. Trust Research – Morten Dæhlen
Morten Dæhlen (University of Oslo, Trust Research Centre) delivered a concise, philosophical yet practical view on trustworthiness as an emergent property of scale.
Six Pillars of Trustworthy AI (reiterated from earlier speakers):
- Accuracy / Precision – measurable performance.
- Explainability – insight into what happens “under the hood” of the algorithm.
- Fairness & Inclusion – protection of autonomy and avoidance of bias.
- Safety & Security – privacy and resilience.
- Sustainability – “green AI” for the climate transition.
- Good Governance – multi‑level oversight (international, national, organisational, individual).
Scaling Challenge: Trust must survive mass deployment (“when we scale it”).
Environmental Footprint of AI:
- AI is resource‑intensive (critical minerals, water‑heavy data centres, high energy consumption).
- Pathways to Reduce Impact:
- Shift to smaller, edge‑centric models.
- Neuromorphic and optical computing to improve efficiency.
- Data reduction / smarter data sharing to curtail storage bloat.
- Governance pathways (ESG frameworks that embed AI awareness).
Metrics & KPIs: Development of performance indicators that monitor trade‑offs (e.g., planetary health vs. user experience).
Policy Suggestion: Taxation of digital resources to fund sustainable AI research and mitigate environmental externalities.
Innovation Highlight: A cross‑disciplinary project combining climate and health data to forecast climate‑sensitive diseases; a seminar in Kigali, Rwanda, two and a half days after this session, will disseminate the findings.
Open Question Posed: “Can AI help decouple economic growth from over‑use of Earth’s resources?” – left for audience reflection.
5. Norway’s National AI Strategy – Minister Karianne Oldernes Tung
Minister Tung delivered a policy‑level address summarising Norway’s concrete actions and vision.
Regulatory Alignment:
- Implementation of the EU AI Act into Norwegian law (ongoing).
- National foundation and language models to be offered free of charge to domestic actors.
Institutional Initiative – “KI Norge” (AI Norway):
- A forthcoming body (name not final) to provide national guidance on responsible AI and act as a trusted coordination hub.
Funding Commitment: > 1 billion NOK (≈ US $100 million) earmarked for six AI research centres, including the Trust Research Centre.
International Cooperation:
- Active participation in the UN, OECD, GPAI, and the Hiroshima AI Process (the first global responsible‑AI framework).
- Advocacy for an international implementation framework and knowledge‑sharing across borders.
Digital Sovereignty & Agency:
- Digital sovereignty = multilayered resilience (regulation, critical‑mineral supply, data‑centre capacity, connectivity, hardware).
- Emphasis on collaboration with like‑minded nations to maintain control and alternative pathways in the face of geopolitical shocks (e.g., cyber‑attacks, supply‑chain disruptions).
India‑Norway Collaboration: Cited the EFTA–India trade agreement as a platform for joint work on the green transition, maritime industry, health tech and AI.
Closing Note: Trustworthy AI emerges from global partnership, not unilateral national effort.
6. Panel Discussion – Global Perspectives
A lightning round of definitions was followed by a short Q&A, featuring representatives from Brazil, India, Japan, Norway and the broader audience.
6.1 Lightning‑Round Definitions of Trustworthy AI
| Speaker | Core Elements Mentioned |
|---|---|
| Minister Tung (Norway) | “Technology must be trusted; society, businesses, politicians and friends must also be trusted.” |
| Yoshimasa Uno (Japan) | AI‑governance = balance of innovation & risk‑management; “AI Basic Plan” aims to make Japan the most AI‑friendly nation. |
| Mala Wang‑Naveen (Norway) | Five pillars: accuracy, reliability, transparency, fairness, accountability. |
| Sindhu Gangadharan (India) | Need understanding of how AI works and agency (choice to opt‑out). |
| Luana Ronkarate (Brazil) | Trustworthy AI must be inclusive, sustainable, respect human rights and national jurisdictions. |
6.2 Protecting Children & Youth (Minister Tung)
- Problem Statement: Children lack digital guardians; platforms expose them to bullying, pressure, and even self‑harm.
- Policy Measures:
- Proposing an age‑limit of 15 years for social‑media access (aligned with EU partners).
- White paper in Norwegian Parliament targeting dark‑pattern regulation, algorithmic transparency, and data‑privacy safeguards.
- Emphasis on digital competence for teachers and adults to teach critical thinking.
6.3 Japan’s AI Basic Plan (Yoshimasa Uno)
- Strategic Drivers: AI’s role in industrial competitiveness and national security; demographic challenges (low birth‑rate, ageing).
- Risk Landscape: Technical (hallucinations, mis‑judgment); social (bias, discrimination, privacy violations); security (cyber‑attacks).
- Policy Goal: Human‑centred AI that respects dignity, balances innovation with risk mitigation, and positions Japan as the most AI‑friendly country.
6.4 Technology Humanism – Sindhu Gangadharan
- Key Message: Technology should serve people, not the other way round.
- Illustrative Example: During the COVID‑19 pandemic, SAP‑enabled dashboards tracked oxygen supply chains across India, saving lives.
- Vision: Leverage AI to solve real societal needs (education, healthcare, agriculture) while ensuring inclusivity.
6.5 Indian AI Policy Values – Amlan Mohanty
- Optimistic Outlook: India’s AI governance aims at expansive, enabling regulation rather than restrictive containment.
- Core Values: Inclusivity, equality, access – embedded in the Constitution and reflected in OECD, G20 and UN discussions.
- Implementation Approach: Light‑touch, agile regulation that translates constitutional rights into AI‑specific guidelines for all stakeholders.
6.6 Brazil’s National AI Plan – Luana Ronkarate
- Values Emphasised: Responsibility, ethics, transparency, inclusion, sovereignty.
- Strategic Aim: Use AI to personalise and proactively deliver public services while safeguarding human rights and democratic values.
6.7 Digital Sovereignty – Minister Tung (Follow‑up)
- Explained digital sovereignty as a holistic capability: regulatory, infrastructural, and collaborative mechanisms that give Norway agency amid global digital volatility.
6.8 Closing Reflections (Panel)
- Consensus that trustworthy AI cannot be built by any single nation; it requires intersection of values, power structures and diverse perspectives.
- Audience directed to a QR code for further resources on trustworthy AI.
7. Final Announcements & Closing
- Upcoming Seminar: An event in Kigali, Rwanda (two and a half days after this session) on integrating climate and health data to predict climate‑sensitive diseases.
- Contact Information: QR code displayed for the Trust Research Centre’s website and further reading.
Key Takeaways
- Geopolitics of AI: Trustworthy AI is inseparable from global power dynamics, supply‑chain concentration, and democratic resilience.
- Three‑Layer Trust Model: Technical, institutional, and geopolitical trust must be cultivated together.
- Norway’s Six‑Pillar Framework: Institutional legitimacy, shared values, robust governance, high‑quality data, digitally mature citizens, and targeted research form a systemic approach.
- Maritime Industry Example: Real‑time sensor data + AI can improve propulsion efficiency, reduce underwater noise, and enable remote/autonomous vessels—yet human trust remains essential.
- Research‑Driven Sustainability: Scaling AI demands smaller models, neuromorphic/optical computing, data reduction, and clear ESG KPIs; taxation of digital resources could fund greener AI.
- National AI Strategy (Norway): AI Norway (KI Norge), EU AI Act transposition, free foundational models, and a NOK 1 bn research fund signal decisive state action.
- Child‑Online Protection: Age‑limit proposal (15 years) and a parliamentary white paper underscore a proactive stance on digital child safety.
- International Cooperation is Mandatory: All panelists highlighted that no single country can secure trustworthy AI; shared standards, cross‑border governance and inclusive participation are imperative.
- Values Across Nations: Inclusivity, fairness, sustainability, human rights, and sovereignty are recurring themes in India, Brazil, Japan and Norway’s AI policies.
- Practical Path Forward: Build interoperable standards, transparent supply‑chains, human‑centred design, and multidisciplinary research—while keeping an eye on environmental impact and societal wellbeing.
See Also:
- thriving-with-ai-human-potential-skills-and-opportunity
- trustworthy-ai-investments-capital-allocations-as-ai-governance
- welfare-for-all-ensuring-equitable-ai-growth-across-the-worlds-largest-and-oldest-democracies
- ai-for-democracy-reimagining-governance-in-the-age-of-intelligence
- responsible-ai-at-scale-governance-integrity-and-cyber-readiness-for-a-changing-world
- ai-and-children-turning-principles-into-practice-for-safe-inclusive-and-empowering-ai
- building-resilient-sustainable-ai-infrastructure-for-people-planet-and-progress