AI in Negotiations, Diplomacy, and Conflict Management

Abstract

The session launched the MOVE 37 initiative – a collaborative research‑and‑development effort that explores how artificial intelligence can be responsibly integrated into high‑stakes diplomatic and negotiation processes. After a brief contextual framing by the Belfer Center team (Charlie Pozniak and Slavina Ancheva), the panel examined three themes: (1) the structural complexity of modern multilateral negotiations; (2) the technical opportunities and limits of AI (including large language models, game‑theoretic tools, and predictive analytics); and (3) practical pathways for responsible deployment. Panelists shared concrete experiences from UNESCO, U.S. intelligence, and a global public‑sector training platform, highlighting both the promise of AI‑augmented insight generation and the risks of opacity, bias, and over‑automation. An audience Q&A surfaced concerns about cultural representation, data‑poisoning, and power asymmetries. The discussion concluded with a call for transparent, human‑centered AI tools and an invitation for continued community collaboration.

Detailed Summary

1. Opening and Context

  • Michael McQuade opened by describing the Emerging Tech Program at the Harvard Kennedy School’s Belfer Center, launched a year earlier to map policy frontiers created by emerging technologies.
  • He introduced MOVE 37, a sub‑project focusing on AI in diplomacy and negotiations, and highlighted the need for cross‑sector collaboration (academia, government, civil society, and industry).
  • The moderator announced the panel’s composition, noting that Gabriela Ramos would join shortly as the chair, and introduced Nandita Balakrishnan, Robyn Scott, and the Belfer researchers Charlie Pozniak and Slavina Ancheva.
  • He also paid tribute to Carme Artigas (absent physically but present via livestream), emphasizing her role in the UN AI Advisory Panel and the Spain‑India AI partnership.

Key Insight

The program frames AI not as a standalone solution but as a policy‑shaping partner for a process that is fundamentally human, high‑stakes, and multi‑dimensional.


2. Framing the Problem – “Complexity of Negotiations”

Presenters: Slavina Ancheva (lead) & Charlie Pozniak

2.1. The Structural Landscape

  • Negotiation as a multi‑layered system:
    • A typical high‑level negotiation may involve seven core parties, their own internal teams, and up to 27 additional stakeholder states that have indirect interests.
    • The EU AI Act negotiation (cited as Carme Artigas’s experience) exemplifies how a single treaty can involve 27 member states plus dozens of external actors.
  • Data volume: A single negotiation can generate thousands of documents, transcripts, and drafts.
  • Human constraints: Limited analyst time, cognitive overload, groupthink, and strict time windows for decisions.

2.2. Conceptual Model of AI Augmentation

  • The presenters outlined a four‑stage pipeline:

    1. Research & Evidence‑Base Construction – gathering, validating, and organizing raw data.
    2. Analysis – extracting patterns, mapping stakeholder preferences, and testing hypotheses.
    3. Strategizing – converting analytical insights into negotiation tactics and option trees.
    4. Execution & Adaptive Feedback – real‑time support (translation, transcription, dynamic briefing) during the negotiation itself.
  • They emphasized that this pipeline is iterative, not linear; insights at any stage may feed back into earlier stages.
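As a minimal illustrative sketch (not code from the session), the iterative four‑stage pipeline might be modeled as a loop in which execution feedback re‑enters the evidence base. The class, function names, and feedback logic here are hypothetical assumptions:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the four-stage pipeline described above.
# Stage names follow the presentation; the state fields and the
# feedback wiring are illustrative assumptions.

@dataclass
class NegotiationState:
    evidence: list = field(default_factory=list)
    insights: list = field(default_factory=list)
    tactics: list = field(default_factory=list)
    feedback: list = field(default_factory=list)

def research(state):    # Stage 1: gather and validate raw data
    state.evidence.append("validated source")

def analyze(state):     # Stage 2: extract patterns from the evidence base
    state.insights.append(f"pattern from {len(state.evidence)} sources")

def strategize(state):  # Stage 3: convert insights into tactics
    state.tactics.append(f"option tree ({len(state.insights)} insights)")

def execute(state):     # Stage 4: real-time support; emits feedback
    state.feedback.append("counterpart shifted position")

def run_pipeline(rounds=2):
    state = NegotiationState()
    for _ in range(rounds):                    # iterative, not linear:
        research(state)
        analyze(state)
        strategize(state)
        execute(state)
        state.evidence.extend(state.feedback)  # feedback re-enters Stage 1
        state.feedback.clear()
    return state
```

The loop makes the presenters' point concrete: each pass through Execution enriches the evidence base for the next round of Research and Analysis.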

2.3. Technical Landscape

  • Beyond simple LLM chatbots:

    • While large language models (LLMs) excel at language generation, they lack the accountability, interpretability, and strategic rigor required for diplomatic contexts.
    • The panel highlighted older, well‑established toolkits (game theory, now roughly 80 years old; decision analysis; supervised and unsupervised learning) that remain essential.
  • Three core challenges in applying AI to diplomacy:

    1. Representation: International negotiations involve fluid institutions and shifting power dynamics that are hard to capture in static models.
    2. Strategic Misrepresentation: Actors may deliberately conceal intentions; AI must handle deception and incomplete information.
    3. Outcome Specification: Defining “success” in multi‑party talks is non‑trivial—different parties prioritize divergent objectives.
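To make the game‑theoretic side of that toolkit concrete, here is a minimal sketch (with assumed payoffs, not figures from the session) that finds pure‑strategy Nash equilibria in a two‑party "concede vs. hold firm" negotiation game:

```python
# Minimal sketch of a decades-old game-theoretic tool: finding
# pure-strategy Nash equilibria of a two-party negotiation modeled
# as a 2x2 game. The payoff numbers are illustrative assumptions.

# payoffs[row][col] = (payoff to party A, payoff to party B)
# Strategy index: 0 = "concede", 1 = "hold firm"
payoffs = [
    [(2, 2), (0, 3)],   # A concedes
    [(3, 0), (1, 1)],   # A holds firm
]

def pure_nash_equilibria(payoffs):
    """Return (row, col) pairs where neither party gains by deviating."""
    equilibria = []
    for r in range(2):
        for c in range(2):
            a, b = payoffs[r][c]
            # A cannot improve by switching rows, B by switching columns
            best_a = all(a >= payoffs[r2][c][0] for r2 in range(2))
            best_b = all(b >= payoffs[r][c2][1] for c2 in range(2))
            if best_a and best_b:
                equilibria.append((r, c))
    return equilibria

# With these payoffs, mutual "hold firm" is the unique pure equilibrium,
# even though mutual concession would leave both parties better off --
# the classic tension such models expose.
```

Real diplomatic applications would need far richer models (many parties, incomplete information, deception), which is exactly where the three challenges above bite.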

2.4. Vision of Future Tools

  • Autonomous research agents that can produce validated source citations, opponent biographies, and preference diagnostics.
  • Strategy sandboxes for “red‑team” simulations, allowing negotiators to test alternative moves before the real table.
  • Real‑time transcription & translation leveraging state‑of‑the‑art speech‑to‑text models.

2.5. Core Ethical Commitments

  1. Human authority remains central – AI cannot make final war‑or‑peace decisions.
  2. Modular transparency – every computational component must expose its reasoning pathway.
  3. Scoped augmentation – tools must be tailored to the specific institutional context and not overstep.

3. Panel Discussion

3.1. Gabriela Ramos (UNESCO) – AI as a Negotiation‑Support Tool

  • Case study: The UNESCO Recommendation on the Ethics of AI, negotiated by 193 countries during the COVID‑19 pandemic.
  • Practical pain points:
    • Tracking country positions across hundreds of comments (55,000 public inputs).
    • Mapping “usual suspects” (e.g., Russia, UK) whose stances often align despite divergent motivations.
  • AI opportunities identified:
    • Position‑tracking dashboards that visually locate each country on a policy spectrum.
    • A knowledge repository of historical negotiation patterns for individual states (e.g., past stances on AI governance).
  • Cautionary note: Existing AI tools (as of 2021) were insufficiently granular; more sophisticated, up‑to‑date models are needed.
  • Human‑in‑the‑loop emphasis: While AI can supply data, the strategic narrative still rests with the diplomat.
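A position‑tracking dashboard of the kind Ramos described could be backed by something as simple as the following sketch; the country names, scores, and bucket thresholds are illustrative assumptions, not UNESCO data:

```python
# Illustrative sketch (not UNESCO's actual tooling) of position
# tracking: place each delegation on a one-dimensional policy spectrum
# and group aligned countries so drafters can see clusters at a glance.

positions = {  # -1.0 = strict regulation ... +1.0 = light-touch
    "Country A": -0.8,
    "Country B": -0.6,
    "Country C": 0.1,
    "Country D": 0.7,
}

def bucket(score):
    """Map a spectrum score to a coarse policy camp (thresholds assumed)."""
    if score < -0.33:
        return "pro-regulation"
    if score > 0.33:
        return "light-touch"
    return "middle ground"

def spectrum_dashboard(positions):
    """Group countries by camp, ordered along the spectrum."""
    groups = {}
    for country, score in sorted(positions.items(), key=lambda kv: kv[1]):
        groups.setdefault(bucket(score), []).append(country)
    return groups
```

A real tool would derive the scores from the thousands of comments mentioned above rather than hand‑assigning them, but the dashboard layer itself can stay this simple.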

3.2. Dr. Nandita Balakrishnan (SCSP) – AI for Intelligence & Decision‑Making

  • Career lens: Spanning academia, public‑sector intelligence, and private‑sector analytics, she observed a “dip” in AI access when moving from research to government work.

  • Current role: Directs AI‑enhanced intelligence assessments that inform both national security and diplomatic strategy.

  • Key observations:

    • Human analysts still perform essential manual synthesis; AI can surface missing data (e.g., an overlooked 10‑year‑old datum that overturns an argument).
    • AI should be viewed as a “data point”, not a final judgment; accountability obliges analysts to explain how an AI output was derived.
  • Three strategic thrusts for the public sector:

    1. Meta‑level framing: AI reshapes the global threat landscape; geopolitics cannot be divorced from AI adoption.
    2. Workforce AI literacy: Embedding AI tools across all federal agencies (State, Defense, Commerce, OPM) – not just in elite “AI units”.
    3. Targeted use cases: Building predictive models for geopolitical events to demonstrate concrete value and justify wider adoption.
  • Recommendation: Deploy AI where it augments human judgement, and retain a human‑final‑review gate for any policy‑impacting output.

3.3. Robyn Scott (Apolitical) – Scaling AI Literacy Across Governments

  • Apolitical’s reach: A network of more than half a million public servants in 160 countries, plus 40 million people reached through integrated training platforms.

  • Survey data (5,000 respondents):

    • 90 % view AI as a major opportunity for the public sector.
    • 70 % have launched or plan AI pilots, yet 45 % lack evaluation frameworks.
    • Only 26 % of AI‑using officials understand their own country’s ethical AI guidelines.
  • Key risks identified:

    1. “Pilotitis” – proliferation of small, uncoordinated pilots without rigor.
    2. Black‑box complacency: Over‑reliance on AI accuracy leads to “sleeping at the wheel.”
    3. Human‑algorithm power imbalance: Framing operators as “below the algorithm” (passive users) versus “above the algorithm” (strategic operators).
  • Suggested remedies:

    • Hands‑on experience: Leaders must personally use AI tools to grasp speed and limits.
    • Ethical grounding: Explicit training on national AI ethics frameworks.
    • Psychological safeguards: Maintain a “battle mentality” – consciously interrogate AI recommendations rather than accepting them wholesale.

3.4. Cross‑Panel Synthesis

  • Common thread: All panelists stressed that AI should augment, not replace, human diplomatic judgment.
  • Consensus on data challenges: Need for transparent provenance, cultural and linguistic diversity, and mechanisms to detect data poisoning.
  • Strategic tension: Balancing efficiency gains (e.g., rapid position tracking) with accountability and legitimacy in multilateral settings.

4. Audience Q&A

  • Sam Dawes (Oxford AI Governance Initiative) – Core issue: embedding cultural diversity in AI models; safeguarding against data poisoning and prompt injection. Responses: Ramos stressed that language is the primary carrier of culture, recommending multi‑language training data and rigorous source tracing; Balakrishnan warned that adversarial negotiations increase poisoning risk and called for continuous testing against ground truth; Scott echoed the need for human oversight and “battle‑ready” skepticism.
  • Devika Rao (Indian classical dance teacher) – Core issue: using AI to support cultural education and cross‑regional frameworks (India–UK). Responses: Scott suggested reaching out to the Swiss multilingual LLM initiative as a possible collaboration, emphasizing that context‑relevant content must precede technology deployment.
  • Arman (J‑PAL South Asia) – Core issue: power asymmetries and how uneven AI data access could tilt negotiations. Responses: Balakrishnan acknowledged this as a key research focus of MOVE 37: designing distributable toolkits that avoid giving any single state a decisive informational edge.
  • Unidentified audience members – Core issue: practicalities of validation and human final review in AI‑generated assessments. Responses: panelists reiterated the need for explainable outputs, counter‑argument generation, and audit trails to satisfy policy‑maker scrutiny.

No formal announcements of new products or partnerships were made during the session.


5. Closing Remarks

  • Michael McQuade thanked the panelists, the audience, and the MOVE 37 team (including Slavina and Charlie).
  • He emphasized that the project is at an early but pivotal stage, inviting ongoing community contributions.
  • The session ended with a reminder that responsible AI deployment requires iterative testing, transparent governance, and continuous human oversight.

Key Takeaways

  • Negotiations are multi‑layered systems with dozens of actors, massive information flows, and tight time constraints; AI must be able to handle this scale without oversimplifying.
  • LLMs alone are insufficient for diplomatic work; they need to be combined with game‑theoretic, decision‑analytic, and domain‑specific models.
  • Three technical challenges dominate: (1) representation of fluid institutions, (2) handling strategic misrepresentation (deception), and (3) defining success metrics for multi‑party outcomes.
  • Human authority remains non‑negotiable – AI tools must be modular, transparent, and scoped to support, not replace, diplomats.
  • Position‑tracking dashboards and historical knowledge repositories are low‑hanging AI applications that can dramatically reduce cognitive load.
  • Intelligence analysts view AI as a supplementary data point; they require explainability to retain accountability to senior decision‑makers.
  • Public‑sector AI literacy is uneven: while 90 % of officials see AI’s promise, only a minority understand national ethical frameworks, leading to pilotitis and potential misuse.
  • Cultural and linguistic diversity must be baked into training data; otherwise AI risks reinforcing existing power imbalances.
  • Risks of over‑automation include “sleeping at the wheel” (over‑trust in high‑accuracy models) and data poisoning in adversarial negotiations.
  • Power asymmetries in data access could reshape the balance of diplomatic leverage; equitable tool distribution is a core research priority for MOVE 37.
  • Collaborative, iterative design—involving diplomats, technologists, and ethicists—is essential to create AI tools that are both effective and trustworthy.

Prepared from the verbatim transcript of the “AI in Negotiations, Diplomacy, and Conflict Management” session at the Delhi AI conference (2026).
