Defence Perspective on AI

Detailed Summary

1. Opening Remarks (Moderator)

  • The moderator introduced the panel, noting that AI is a hot topic across consumer, commercial and defence domains.
  • She highlighted recent diplomatic milestones (e.g., French President Emmanuel Macron’s visit, five defence‑related outcomes, and US‑India AI cooperation) to underline the strategic importance of AI for national security.
  • A brief reminder was given that the panel would have only 55 minutes, so each speaker would receive roughly seven minutes.

2. Keynote Address – Lt Gen Vipun Shinghal (Deputy Chief of Army Staff, IS&T)

2.1. AI as a Force Multiplier & Ethical Imperative

  • Described AI as a “force multiplier” that can improve battlefield tactics, logistics, and equipment performance.
  • Stressed that human judgment cannot be delegated to algorithms; ultimate responsibility remains with commanders.

2.2. Illustrative Operational Anecdote

  • Recounted a high‑tempo operation where an AI‑generated strike recommendation (high confidence, short decision window) was over‑ruled because the system lacked data on an ongoing civilian evacuation.
  • The commander’s pause prevented civilian casualties, underscoring the need for human situational awareness.

2.3. Four Critical Questions for Military Leaders

  1. Which decisions must stay with commanders, and which can be delegated to data and algorithms?
  2. Do current rules of engagement and safety frameworks keep pace with data‑driven systems? (A particular concern given the opacity of “black‑box” models.)
  3. Are AI‑enabled systems subjected to the same rigorous testing, fielding, and red‑team evaluation as conventional weapons?
  4. How do we prepare commanders to work with machines that can out‑pace human cognition, without ceding moral responsibility?

2.4. National AI Governance Guidelines (India)

  • India’s AI‑governance framework (released during the summit) identifies AI as probabilistic, generative and adaptive, warning of unintended consequences.
  • Risk 6 – National Security: disinformation, cyber‑attacks on critical infrastructure, and lethal autonomous weapons (LAWS) are highlighted as looming threats.

2.5. Institutional Commitment

  • The Indian Army has declared 2024 the “Year of Data‑Centricity and Networking.”
  • AI is being integrated into decision‑support, surveillance, RECI (Reconnaissance‑Electronic‑Combat‑Intelligence) and other functional domains.
  • Collaboration is ongoing with start‑ups, academia, and industry; a dedicated expo booth showcases these initiatives.

2.6. Principles for Responsible AI Adoption

  1. Institutionalise human control (legal definition of what is AI‑assisted vs. AI‑decided).
  2. Treat AI as a weapon system – apply identical certification, red‑team, and trial‑evaluation processes.
  3. Ensure data sovereignty and trust – commanders must verify data provenance.
  4. Evolve leadership development – commanders need training to interrogate AI outputs and retain moral responsibility.
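The distinction in principle 1 between AI‑assisted and AI‑decided actions can be made concrete in software. The sketch below (all names hypothetical, not from any fielded system) shows a decision gate in which lethal recommendations are always held for commander approval, regardless of model confidence or delegated authority:

```python
from dataclasses import dataclass
from enum import Enum

class Authority(Enum):
    AI_ASSISTED = "ai_assisted"   # AI recommends, a human decides
    AI_DECIDED = "ai_decided"     # AI may act within a bounded envelope

@dataclass
class Recommendation:
    action: str
    confidence: float
    lethal: bool

def execute(rec: Recommendation, authority: Authority,
            human_approved: bool) -> str:
    # Lethal actions are always AI-assisted: a commander must approve,
    # no matter how confident the model is.
    if rec.lethal or authority is Authority.AI_ASSISTED:
        if not human_approved:
            return "HELD: awaiting commander approval"
        return f"EXECUTED (human-approved): {rec.action}"
    return f"EXECUTED (within delegated envelope): {rec.action}"

strike = Recommendation("strike grid 41B", confidence=0.97, lethal=True)
print(execute(strike, Authority.AI_DECIDED, human_approved=False))
# held for approval, despite 97 % confidence and delegated authority
```

The point of the pattern is that the human‑only decision boundary lives in code and policy, not in the model's confidence score.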

2.7. Historical Analogy & Outlook

  • Compared AI governance to past regimes (NBC weapons, land‑mines, Geneva Conventions) that survived because of human‑centred restraint.
  • Expressed confidence that international conventions on conventional weapons will eventually encompass AI.
  • Concluded with a cultural note: “Shakti must go hand in hand with Dharma” – power must be balanced by ethical restraint.

3. Panel Discussion

3.1. Subimal Bhattacharjee – AI Integration in ISR & Decision‑Support

  • Bounded rationality: Humans suffer from incomplete information, cognitive limits, time pressure, and limited attention; AI can mitigate these constraints.
  • Limitations of current LLMs:
      – Pre‑trained with no real‑time learning – unsuitable for dynamic battlefields.
      – Probabilistic token prediction; fails on computation‑heavy tasks.
      – Context sensitivity – performance collapses when the operating context changes.
      – Lack of abductive reasoning (reasoning from incomplete facts).
  • Ethical foundations: AI must be examined not only technologically and ethically but also psychologically and philosophically (human–machine trust).
  • Case study – Israel–Palestine: The Israel Defense Forces used an AI engine with a 90 % confidence threshold to target low‑level operatives; a 10 % error rate could have caused ~3,700 civilian deaths.
  • Historical parallel – My Lai: Human‑driven atrocities illustrate that attribution (who is responsible) is clearer when a person makes the decision, unlike with opaque AI systems.
  • Command responsibility: Responsibility is absolute; it cannot be delegated to machines or even to subordinate humans. AI should remain a tool, not a decision‑maker.
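The casualty figure in the case study is simple expected‑value arithmetic. The sketch below makes it explicit; note that the ~37,000 flagged‑target count is back‑calculated from the quoted numbers, not a figure stated by the panel:

```python
def expected_misidentifications(flagged_targets: int, error_rate: float) -> float:
    """Expected number of wrongly targeted individuals,
    treating the error rate as independent per target."""
    return flagged_targets * error_rate

# A 90 % confidence threshold leaves a 10 % error rate; the ~3,700
# figure quoted in the panel is consistent with roughly 37,000
# flagged targets (an inference, not a sourced number).
print(expected_misidentifications(37_000, 0.10))  # → 3700.0
```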

3.2. Prof. Ramakrishna – Global AI‑Centric Military Capabilities

  • Stressed that no command centre trusts a black‑box; transparency is essential.
  • Highlighted AI’s strengths in situational summarisation, multi‑sensor pattern detection, and generating alternative courses of action.
  • Introduced the concept of provenance (origin of data and reasoning trails) borrowed from database theory, arguing that decision‑makers need traceability.
  • Discussed observability stacks (monitoring GPU/CPU usage, network traffic) to prevent data leakage in air‑gapped environments.
  • Cited Stanford’s Snorkel (data‑programming) and India’s open‑source Sphere platform that let human‑written programs override ML outputs, improving trust.
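The provenance idea above can be sketched as a data structure: every recommendation carries a traceable chain of the sources and transformations behind it, so a commander can interrogate *why*, not just *what*. A minimal sketch (all names hypothetical):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    source: str       # e.g. a sensor ID or database table
    operation: str    # the transformation applied at this step
    timestamp: str

@dataclass
class Recommendation:
    summary: str
    confidence: float
    trail: list = field(default_factory=list)

    def derive(self, source: str, operation: str) -> None:
        """Append one step to the reasoning trail."""
        self.trail.append(ProvenanceRecord(
            source, operation,
            datetime.now(timezone.utc).isoformat()))

rec = Recommendation("vehicle convoy detected, sector 7", confidence=0.88)
rec.derive("SAR-satellite-03", "change detection vs. last pass")
rec.derive("SIGINT-feed-12", "correlated emitter activity")

for step in rec.trail:
    print(f"{step.source}: {step.operation}")
```

An observability stack of the kind described would sit alongside this, monitoring the infrastructure (GPU/CPU load, network traffic) rather than the reasoning trail itself.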

3.3. Maj Gen Pawan Anand (Retd.) – Governance, Regulation & Rapid Deployment

  • Policy lag vs. AI velocity: AI evolves faster than traditional weapons; governance cannot afford the usual lag.
  • Triple constraint: Complexity, urgency, and risk (lethality) must all be balanced in deployment.
  • Reliability requirement: Military AI must be deterministically reliable; a malfunction can cause loss of life.
  • Life‑cycle responsibility: Human‑in‑the‑loop oversight must be embedded from ideation through design, development, deployment, and decommissioning (“responsible AI at every stage”).
  • Bias and drift detection: Continuous monitoring for algorithmic bias and performance drift is mandatory.
  • Operational envelopes: Define bounded envelopes for weapon‑system use; enforce them internationally through confidence‑building measures (CBMs).
  • International coordination: Ongoing work at the RE‑AIM summit (Spain) and bilateral/trilateral dialogues to align national policies with global norms.
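The continuous drift monitoring called for above can be as simple as a rolling statistical check: flag when a model's recent performance departs from its validation baseline. A minimal sketch, with illustrative thresholds and a simulated degradation:

```python
from collections import deque
from statistics import mean

class DriftMonitor:
    """Flag drift when the rolling mean of a model metric departs
    from its validation baseline by more than a tolerance."""
    def __init__(self, baseline: float, tolerance: float, window: int = 100):
        self.baseline = baseline
        self.tolerance = tolerance
        self.scores = deque(maxlen=window)

    def observe(self, score: float) -> bool:
        self.scores.append(score)
        if len(self.scores) < self.scores.maxlen:
            return False  # not enough data to judge yet
        return abs(mean(self.scores) - self.baseline) > self.tolerance

monitor = DriftMonitor(baseline=0.90, tolerance=0.05, window=50)
drifted = False
for i in range(60):
    # simulated degradation: confidence slides from 0.90 toward 0.66
    drifted = monitor.observe(0.90 - 0.004 * i)
print(drifted)  # → True once the window mean falls outside tolerance
```

Production monitoring would use stronger statistics (e.g. per‑feature distribution tests), but the life‑cycle principle is the same: the check runs continuously after deployment, not once at certification.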

3.4. Dr Vikram Jayaram – Building an AI‑Ecosystem for Defence

  • Described a cluster model: deep‑tech start‑ups + nodal agencies + military + system integrators + domain experts.
  • Emphasised ROI for defence is security, not profit: faster intelligence, effective decision‑support, rapid prototyping.
  • Outlined an iterative development cycle (build → test → refine) akin to medical‑device pipelines (basic → clinical → approved).
  • Mentioned the IDEX programme which accelerates start‑up participation and rapid field testing.
  • Stressed the need for bidirectional embedding: founders must understand military nuance, and soldiers must understand start‑up culture.
  • Highlighted that generative AI is just a slice of AI; deterministic algorithms, game theory, and classic data‑science remain crucial.

3.5. Mrs Madhumita Mohapatra – AI‑Enabled Defence Supply‑Chain

  • From forecast‑only to predict‑and‑adapt: Modern supply chains must shift from reactive stocking to sensing, predicting, and adapting in real time.
  • India‑specific challenges: Geography (jungle and oceanic borders) and multiple threat vectors demand resilient, autonomous logistics.
  • Global best practices:
      – UK and US defence depots use AI‑driven robotics, vision systems, and intelligent slotting in warehousing.
      – The UK MoD’s digitisation programme embeds AI in demand forecasting and automated fulfilment.
      – The Russia–Ukraine conflict showcases AI‑based route optimisation for convoy resilience.
  • Indigenisation goal: AI‑driven logistics should not simply copy foreign models; it must support Atma Nirbhar Bharat, creating sovereign, resilient supply‑chain capabilities.
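The shift from forecast‑only to sense‑predict‑adapt can be illustrated with exponential smoothing: rather than fixing a demand forecast at planning time, the estimate updates with every consumption signal sensed from the field. A toy sketch (parameters illustrative):

```python
class AdaptiveForecast:
    """Exponentially smoothed demand forecast that adapts
    as each new consumption signal arrives."""
    def __init__(self, initial: float, alpha: float = 0.3):
        self.level = initial
        self.alpha = alpha  # responsiveness to fresh observations

    def sense(self, observed_demand: float) -> float:
        # adapt: blend the new observation into the running estimate
        self.level += self.alpha * (observed_demand - self.level)
        return self.level

f = AdaptiveForecast(initial=100.0)
for demand in [100, 105, 140, 160, 155]:  # a surge begins mid-stream
    forecast = f.sense(demand)
print(round(forecast, 1))  # → 135.5, tracking the surge
```

A static plan would still order against the original 100‑unit estimate; the adaptive version has already moved most of the way toward the surge level. Real systems layer richer models (seasonality, route risk) on top of the same sense‑and‑adapt loop.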

3.6. Audience Q&A (Selected Exchanges)

  • Unidentified audience member (referencing the “Sindhul” operation), on hardware dependence (GPU imports): Panelists advised diversifying supply sources, developing CPU‑friendly small LLMs, and investing in indigenous semiconductor capability (four lead agencies). They also emphasised the late‑mover advantage: the ability to design hardware around software workloads (e.g., Mixture‑of‑Experts models).
  • Mr Pradeep Gupta (brief interjection): Acknowledged, with a follow‑up conversation promised after the session.
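The Mixture‑of‑Experts point is that routing each token to only a few experts means compute (and hence hardware) can be sized to the *active* parameter count rather than the total. A toy sketch of top‑k gating (dimensions and scores illustrative, not any production model):

```python
import math
import random

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def top_k_route(gate_scores, k=2):
    """Pick the k highest-scoring experts; only they run, so active
    compute is roughly k/num_experts of the dense equivalent."""
    ranked = sorted(range(len(gate_scores)),
                    key=lambda i: gate_scores[i], reverse=True)
    chosen = ranked[:k]
    weights = softmax([gate_scores[i] for i in chosen])
    return list(zip(chosen, weights))

random.seed(0)
scores = [random.gauss(0, 1) for _ in range(8)]  # 8 experts, 2 active
print(top_k_route(scores, k=2))  # only 2 of 8 experts execute
```

Under this routing scheme, a late mover can provision memory bandwidth and interconnect for the sparse activation pattern instead of importing GPUs sized for dense models.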

3.7. Closing Remarks

  • The moderator thanked all panelists and highlighted the collective commitment of industry, academia, and the armed forces to build a responsible AI ecosystem.
  • Reiterated the importance of rapid yet safe deployment, human‑in‑the‑loop governance, and indigenous capability development.

Key Takeaways

  • Human judgment is non‑negotiable: AI can recommend, but commanders must retain ultimate authority, especially for lethal actions.
  • AI as a force multiplier: It can alleviate bounded rationality (incomplete info, cognitive limits, time pressure, attention span) but must be transparent, test‑validated, and human‑controlled.
  • Governance must be embedded throughout the AI life‑cycle – from concept to decommissioning – with explicit legal definitions of human‑only decision points.
  • Provenance, observability, and attribution are essential to build trust; black‑box models alone are insufficient for mission‑critical use.
  • International norms are evolving (UN conventions on conventional weapons, RE‑AIM dialogues); India must align domestic policy while preserving strategic autonomy.
  • Indigenous data‑centric warfare: 2024 is declared the “Year of Data‑Centricity”; AI is treated as a weapon system requiring rigorous certification.
  • Industry‑military‑academic clusters (e.g., IDEX, start‑up ecosystems) enable rapid prototyping, iterative testing, and real‑world feedback.
  • Supply‑chain AI must move from static forecasting to dynamic, predictive, and adaptive logistics, tailored to India’s unique terrain and security environment.
  • Hardware sovereignty is a strategic priority: diversify GPU imports, develop CPU‑friendly models, and accelerate domestic semiconductor production.
  • Ethical legacy: Past weapons conventions (NBC, land‑mines, Geneva) demonstrate that ethical restraint can keep pace with technological change; the same principle should guide AI adoption.
