Practical Aspects of Using AI in Daily Life
Abstract
The panel explored why “sovereign AI” has become a global priority, especially in the context of rising geopolitical tensions and rapid advances in generative AI. Participants examined the technical, regulatory, and strategic dimensions of AI sovereignty across layers—from data and cloud infrastructure to algorithms and autonomous systems. Defence, policing, and industry perspectives highlighted concrete use‑cases (persistent surveillance, cognitive warfare, autonomous weapons, law‑enforcement analytics) and the risks of bias, dependency on foreign compute, and loss of control. The discussion concluded with contrasting views on whether AI sovereignty should be pursued through sweeping, all‑layer reforms or incremental, sector‑by‑sector steps.
Detailed Summary
1. Moderator (Abhishek) – Opening Remarks
- The moderator (Abhishek) opened with a reminder of the current geopolitical climate, noting that "sovereign AI for national security" is a pressing issue.
- He highlighted two drivers:
  - Geopolitical pressure – nations are increasingly wary of foreign AI dominance.
  - Technological acceleration – recent releases of "GPT‑5.3 Codex" and "GPT‑4.6" demonstrate exponential capability growth, including AI that can partially write itself.
- An anecdote was shared: a college‑aged relative showed him a technical document claiming "GPT‑5.3 is our first model that was instrumental in creating itself," underscoring the self‑evolving nature of modern models.
2. Pierre Stefano (KPMG) – The Global Landscape of Digital Sovereignty
| Key Points | Details |
|---|---|
| Emotional debate | Pierre observed that discussions on AI sovereignty are "highly emotional," especially in Europe, and that the debate has spread to Canada, the Middle East, India, Australia, and Africa, with the US and China as notable exceptions. |
| Need for factual grounding | He stressed the importance of moving beyond feelings to “facts and figures” to define what sovereignty actually means (infrastructure, applications, models, chips, data residency). |
| Layer‑by‑layer approach | Sovereignty must be dissected: • Data level – Not all government data needs to be on‑prem; public cloud can be viable for non‑confidential data. • AI‑cloud interdependence – No AI without cloud; cloud choices affect sovereignty. |
| Sector‑specific sensitivity | Defense, national security, healthcare, taxation are identified as domains where strict sovereignty is essential. |
| Analogy | “You can’t just turn off one tap and turn another on” – sovereignty requires comprehensive, not selective, control. |
Takeaway: Sovereignty is multidimensional; policymakers must define which layers (data, compute, models, chips) require domestic control and which can be outsourced safely.
3. Harsh Shibar (Indian Army) – Sovereignty in Defence AI
3.1 Conceptual Foundations
- AI expands human cognitive limits beyond “bounded rationality.” It can synthesize massive data, but remains inductive (lacks abductive reasoning).
- Military decisions are high‑cost; reliance on purely inductive AI can be hazardous.
3.2 Four Defence Domains
| Domain | Sovereignty Concerns & Risks | Illustrative Example |
|---|---|---|
| Persistent Surveillance | Risk of automation bias – operators over‑trust AI outputs. | Reference to 1983 Soviet “Petrov” incident where a false alarm could have triggered nuclear launch; human scepticism averted disaster. |
| Cognitive Warfare | Need for superior home‑grown algorithms to both defend against enemy propaganda and craft own narratives. | AI can generate narratives at “lightning speed”; sovereignty means owning the algorithmic stack. |
| Autonomous Systems | Threat of adversarial attacks (poisoned data, backdoors) on mission‑level autonomous platforms. | Emphasis on the distinction between platform autonomy (missiles) and mission autonomy (coordinated autonomous swarms). |
| Decision Support | Three decision types: • Deterministic – optimisation, logistics – AI can be trusted. • Stochastic – probabilistic scenarios (weather, civil unrest) – AI aids simulation. • Uncertainty / Fog of War – AI can assist but should not be the final decision‑maker. | Calls for a benchmarking suite to test AI suitability for each decision type. |
3.3 Key Recommendation
- Build a testing‑and‑benchmarking platform to certify AI systems for military use before deployment.
4. Ajay Singal (IPS, Haryana) – Policing & AI Sovereignty
- Self‑identification: "We are technologically illiterate compared with criminals who are a step ahead."
- Historical analogy: Innovations (electricity, internet, printing press) act as force‑multipliers rather than mere efficiency tools.
- AI hype vs. reality: Cites Nandan Nilekani's caution on hype cycles; notes AI is projected to influence $10 trillion of the global economy by 2030.
- Policing challenges:
  - Real‑time response – AI can accelerate crime detection but also speeds up criminal adaptation.
  - Historical lag – police have traditionally reacted only after criminals adopted new technology (e.g., the internet, mobile phones); AI demands proactive policy.
- Five‑layer “Ekema” model (energy, chips, infrastructure, models, applications) – sovereignty must span all layers; failure at any lower layer compromises the whole stack.
- Risks to law‑enforcement:
  - Power outages hitting AI‑enabled emergency response.
  - External control of chips/compute throttling analytics.
  - Foreign cloud jurisdictions exposing sensitive data.
  - Environmental costs (energy, water, carbon).
5. Bridges Singh – Data Bias, Incremental vs. Whole‑Stack Sovereignty
- Definition of sovereignty: "Anything you cannot develop, deploy, govern, audit, secure, or license on your own terms is not sovereign."
- Kill‑switch risk: External providers could shut down critical AI services (parallels drawn to the SWIFT freeze on Russia).
- Data bias problem: Large language models trained on Common Crawl (Reddit, Wikipedia) inherit Western‑centric bias; they under‑represent regional languages (Bhojpuri, Malayalam, Tamil, Marathi) and cultural contexts.
- Cultural erasure: AI's flattening of local knowledge threatens diversity; facial‑recognition error rates of up to 40% on "brown" faces were cited as an example.
- Governance differences: Contrasts between Western individual‑centric AI ethics and Indian “Dharmic” societal principles.
- Strategic suggestion: Build a national cognitive public infrastructure—domestic compute, data, and model pipelines—to avoid reliance on foreign “AI wrappers.”
- Compute as geopolitical lever: GPUs and high‑performance compute are now “strategic assets”; sovereignty must include indigenous compute capacity.
6. Martin (Terra Data) – Industry View on Layered Sovereignty
- Layered approach reaffirmed: Sovereignty must be addressed across every stack—data, compute, models, applications.
- Data‑centric sovereignty: Nations need data‑ownership frameworks (local storage, licensing, audit trails) to prevent foreign extraction.
- Compute demand illustration: In healthcare, a “personal health‑agent” could issue thousands of queries per second, illustrating massive future compute loads.
- Readiness assessment: India's current compute ecosystem is not yet sufficient for fully sovereign AI from day zero; incremental building of the layers is advisable while simultaneously investing in domestic HPC and cloud capabilities.
7. Closing & Open Floor
- The moderator thanked participants and opened the floor for audience questions (the transcript ends before any Q&A material appears).
Key Takeaways
- Sovereign AI is a multidimensional construct that spans data residency, compute infrastructure, model ownership, algorithmic transparency, and licensing.
- Emotions dominate the discourse, but policymakers must ground decisions in concrete layers and risk assessments.
- Defence applications demand rigorous testing: persistent surveillance must guard against automation bias; autonomous weapons need protection from adversarial attacks; decision‑support AI must be limited to deterministic or stochastic scenarios, not full uncertainty.
- Law‑enforcement faces a “five‑layer” challenge—energy, chips, infrastructure, models, applications—requiring sovereignty at every tier to avoid single‑point failures.
- Data bias threatens national identity: models trained on predominantly Western corpora misrepresent local languages and cultures, leading to poor performance and potential cultural erasure.
- External reliance is a strategic vulnerability: kill‑switches, SWIFT‑style sanctions, and cloud‑jurisdictional exposure illustrate why full control (or at least trusted escrow) is essential.
- Compute capacity is now geopolitically strategic; building domestic GPU/HPC capabilities is a core component of AI sovereignty.
- Incremental vs. whole‑stack approach: industry experts (Bridges Singh, Martin) suggest a pragmatic blend—start with critical layers (data and compute) while progressively extending sovereign control across the stack.
- Benchmarking & certification for AI systems (especially in defence) are needed to ensure that only vetted models are deployed.
- The original workshop brief does not match the actual session; the panel discussed sovereign AI rather than hands‑on AI tools for daily life.
Prepared by the AI Conference Summarization Team.
See Also:
- thriving-with-ai-human-potential-skills-and-opportunity
- responsible-ai-at-scale-governance-integrity-and-cyber-readiness-for-a-changing-world
- welfare-for-all-ensuring-equitable-ai-growth-across-the-worlds-largest-and-oldest-democracies
- artificial-general-intelligence-a-new-paradigm-of-safety-security-privacy-ethics-and-governance
- beyond-the-cloud-the-sovereign-ai-moment
- ai-for-democracy-reimagining-governance-in-the-age-of-intelligence
- ai-for-the-good-of-all-brazilian-perspectives-on-the-future-of-ai
- mahaai-building-safe-secure-and-smart-governance