Toward Collective Action: A Roundtable on Safe and Trusted AI

Abstract

The roundtable convened African policymakers, researchers, civil‑society actors and innovation leaders to define what “safe and trusted AI” means in the African context. Panelists examined undesirable AI outcomes—dependency, digital neocolonialism, misinformation, and existential cultural threats—and identified short‑ and long‑term risks, including the rise of AI‑driven disinformation agents. They highlighted gaps in AI incident reporting, the scarcity of African‑specific policy frameworks, and the urgent need for capacity‑building programmes. The discussion converged on three cross‑cutting themes: empowerment through agency, collaborative infrastructure (e.g., the African Computer Initiative), and pragmatic governance (transparent procurement, human‑in‑the‑loop designs). Audience questions probed transparency, the feasibility of AI‑generated media watermarks, and strategies to prevent AI from widening the digital divide. The session concluded with concrete calls for collective action, inclusive policy development, and regional cooperation.

Detailed Summary

1. Opening & Logistics

  • The moderator opened the roundtable, acknowledging the research team members (including Marie Iradukunda, Gattoni, and Michelle Malonza) and the hosting organisation AI Safety South Africa.
  • Participants were invited to register on the NUMA platform, scan QR codes for contact details, and use Slido for submitting questions.
  • The agenda comprised a ~30‑minute panel discussion and ~15 minutes of panel‑led dialogue, followed by brief concluding remarks.

2. What Does “Undesirable AI” Look Like for Africa?

Amb. Philip Thigo answered the opening question:

  • Dependency vs. Capacity – AI systems that create reliance on external technologies erode human agency, especially in a continent still building its own capabilities.
  • Digital Neocolonialism – Extraction of African data and value capture by foreign firms, leaving local institutions as mere “users”.
  • Cultural & Existential Threats – AI models built without African knowledge, languages, or cultural nuance risk becoming “civilization‑extinction” agents, threatening identity and societal cohesion.

These points set the tone for the rest of the discussion: safety is inseparable from sovereignty, agency, and contextual relevance.

3. Short‑ and Long‑Term Risks

Prof. Jonathan Shock expanded on risk categories:

  • Misinformation vs. Disinformation – Misinformation is the unintentional spread of false information; disinformation is targeted, often gender‑biased, political manipulation (e.g., campaigns against female politicians).
  • Trust Erosion – Ongoing election‑time campaigns across Ghana, South Africa, Nigeria are already leveraging AI‑amplified false narratives, threatening democratic trust.
  • AI‑Agents – Emerging capability for a single malicious actor to create autonomous agents that can orchestrate large‑scale misinformation campaigns, lowering the barrier to entry beyond big‑tech dependence.
  • Long‑term Existential Concerns – While speculative, the panel acknowledged the need to study scenarios where AI systems become uncontrollable (“out‑of‑control” threats).

4. Data Gaps & the Need for an African AI Incident Database

Dr Chinasa Okolo highlighted the lack of continent‑specific incident tracking:

  • Existing global AI‑incident repositories often mis‑classify African entries under “African‑American”.
  • Concrete examples: AI‑graded university exams in Nigeria and South Africa causing grading disputes, which received minimal media attention.
  • Recommendation: Build a dedicated African AI incident database to surface harms, inform regulation, and hold perpetrators accountable.

Follow‑up on Accountability

  • Dr Okolo noted that advocacy pathways common in the U.S. (writing to congresspeople, engaging staff) are uneven across African nations.
  • She advocated for coalition‑building among civil‑society groups to amplify voices, acknowledging the risk of governmental suppression and potential violence against activists.

5. Policy Landscape & Talent Shortage

Mark Gaffley described systemic gaps:

  • Absence of AI‑specific policies – Many African countries have AI strategies but lack concrete policies, regulatory mechanisms, or enforcement capacity.
  • Limited Technical Fluency – Public‑sector officials often conflate AI with chat‑bots (e.g., “GPT”), indicating a basic literacy problem.
  • All‑In Effort Needed – Bridging civil‑society, academia, government, and private sector is essential; the problem is not merely technical but socio‑political.

6. Defining “Existential Risk” for Africa

Dr Okolo reframed existential risk away from sci‑fi scenarios (e.g., AI launching nuclear weapons) toward real, present‑day threats:

  • Democratic Stability – Disinformation that destabilises elections.
  • Social Cohesion – AI‑enabled gender‑based political violence.
  • Cultural Erasure – Models insensitive to African languages, traditions, and local knowledge.

7. Who Should Benefit? – The Three Personas

Mark Gaffley presented a three‑persona framework:

  1. Scientists – Need access to AI models and evaluation tools; African researchers are the primary data generators (e.g., Kenya’s massive ChatGPT usage for emotional advice).
  2. Governments – Must develop capacity to hold multinational AI firms accountable; procurement processes should embed safety benchmarks.
  3. Citizenry – Must be included in design and deployment, ensuring safe environments that prevent manipulation by “agents”.

He argued that building local AI models attuned to African contexts is vital to avoid over‑reliance on external platforms like OpenAI, Anthropic, etc.

8. Capacity‑Building Initiatives

  • Public Awareness Survey (GCG) – Conducted within the South African Social Attitudes Survey; revealed ≈75 % of respondents have “very little” AI knowledge, with learning mainly through informal channels (social media, TV).
  • Short Courses – Accredited university courses on AI ethics and human rights, attracting thousands of applicants globally; scholarships target African women (Women in Focus series).
  • Upcoming MOOC – Free, publicly‑available course using relatable caricatures to demystify AI for broader audiences.

These efforts aim to raise AI literacy, enabling Africans to define their own AI preferences.

9. Panel Reflections on Desired AI Outcomes

  • Prof. Shock stressed empowerment & agency: AI should expand decision‑making options, but only if it respects local languages and contexts.
  • Dr Okolo highlighted equitable participation in AI development, noting that youth underemployment drives interest in AI careers; inclusive governance can turn AI‑safety work into viable career pathways.
  • Amb. Thigo emphasized that “existential risk” for Africa translates into threats to democracy and societal harmony, not sci‑fi apocalypse.

All agreed on the need to redefine existential risk in African terms and to focus on practical outcomes that protect agency.

10. Collaboration Infrastructure – African Computer Initiative

Prof. Shock announced the launch of the African Computer Initiative (ACI) at UCT:

  • A high‑performance computing (HPC) cloud with GPUs, made available to researchers continent‑wide.
  • Positioned as a network‑effect rather than competition: shared resources empower institutions that cannot individually afford trillion‑dollar infrastructure.
  • Calls for joint effort among academia, civil society, government, and private sector.

11. Deploying AI in Critical Infrastructure

Multiple panelists addressed governance considerations:

  • Mark Gaffley: “If it ain’t broke, don’t fix it.” AI should be deployed only when it demonstrably solves a problem better than existing solutions.
  • Prof. Shock: Emphasised human‑in‑the‑loop designs and transparent decision logs to preserve reasoning ability.
  • Amb. Thigo: Highlighted the importance of procurement clauses that embed safety benchmarks, audit rights, and data‑localisation requirements.
  • Dr Okolo: Stressed the need for agile, continuously updated policy mechanisms, since AI technologies change faster than traditional hardware lifecycles.
  • Consensus: Inclusive, transparent, and accountable deployment, with fallback analog systems for those unable to access AI tools.

12. Audience Q&A

12.1 Transparency & Explainability

Question (Prashok): What should an ideal AI model optimise for to improve inclusivity and trust?

  • Panel Response: Build explainable systems; while the internal weight matrices are opaque, developers must supply intelligible decision rationales and user‑facing explanations.

12.2 Misconceptions About AI

Question: What are the biggest misconceptions?

  • Mark Gaffley: AI is not a panacea for development challenges; basic infrastructure (hospitals, teachers, reliable electricity) often yields higher impact.
  • Prof. Shock: Over‑hyped expectations overlook the necessity of domain‑specific, context‑aware solutions.

12.3 Watermarking AI‑Generated Media

Question: Should mandatory watermarks be required?

  • Panel Consensus: Watermarks are a short‑term mitigation; open‑source models can bypass them, and malicious actors will simply avoid watermarking. Hence, watermarks alone cannot solve the problem.

12.4 Preventing the Digital Divide

Question: How can we ensure AI does not widen existing digital exclusion (≈64 % of Africans lack internet access)?

  • Mark Gaffley: Leverage AI to optimise development services (e.g., smart grids) that improve connectivity and electricity, thereby indirectly expanding AI accessibility.
  • Prof. Shock: Use AI to accelerate non‑AI infrastructure (e.g., planning better power distribution).
  • Amb. Thigo: Policy must first address connectivity, electricity, and literacy, then introduce AI as a complementary tool.

12.5 Philosophical “AI‑Chosen Structure for Humanity”

Question: If AI were to decide on an optimal socio‑economic system, what should it be?

  • Prof. Shock (brief): An AI‑chosen structure would likely be hyper‑efficient and strictly time‑driven, implying a loss of human deliberation and values.

12.6 Policy & Next‑Generation Safety

Question: Should policy be reactive or proactive, given the relatively early AI uptake in Africa?

  • Mark Gaffley: Proactive engagement is vital—open feedback periods on AI strategies, scholarly analysis, and grassroots advocacy can shape policy before harmful deployments.
  • Prof. Shock: Youth must be included; their feedback loops improve policy relevance.

13. Closing Remarks

  • Iman (moderator) thanked the panel and attendees for their participation, and announced a group photo.
  • An informal networking event at Café Lota (7:30 pm) was promoted.

Key Takeaways

  • Agency & Sovereignty First – AI systems that create dependency or enable digital neocolonialism are fundamentally unacceptable for Africa.
  • Misinformation & AI‑Agents – Immediate risks stem from AI‑driven misinformation, especially gender‑targeted political attacks; emerging AI‑agents lower the barrier for coordinated disinformation.
  • Data Incident Gap – There is a critical need for an African‑focused AI incident database to capture harms and inform regulation.
  • Capacity Building Is Essential – Survey data shows ~75 % low AI literacy; GCG’s courses, scholarships, and upcoming MOOC are key levers for increasing knowledge.
  • Three‑Persona Framework – Scientists need model access, governments need regulatory capacity, citizens need inclusion; all three must be coordinated.
  • Local Model Development – Reliance on external APIs (e.g., OpenAI) undermines agency; building African‑centric models is a strategic priority.
  • Collaborative Infrastructure – The African Computer Initiative provides shared HPC resources to democratise AI research across the continent.
  • Governance Principles for Critical Infrastructure – Apply “if it ain’t broke, don’t fix it,” embed safety benchmarks in procurement, enforce human‑in‑the‑loop designs, and maintain analog alternatives.
  • Transparency Over Watermarking – Explainable AI is more effective than mandatory watermarks, which can be circumvented by malicious actors.
  • Digital Divide Must Be Bridged First – AI deployment should follow concrete improvements in connectivity, electricity, and basic services; AI can then accelerate these developments.

These points collectively outline a roadmap for African stakeholders to move from “what we don’t want” to “what we need” in the safe and trusted deployment of AI across the continent.
