AI for Secure India: Combating AI-Enabled Cybercrime, Deepfakes, Darkweb Threats and Data Breaches

Abstract

The panel examined how artificial intelligence is reshaping India’s cyber‑threat landscape. Speakers highlighted the dual nature of AI—its capacity to accelerate economic growth while simultaneously furnishing cyber‑criminals with powerful tools for fraud, deep‑fakes, and ransomware. The discussion moved through macro‑level trends (AI’s contribution to GDP, massive financial losses in 2023‑24), concrete forensic applications, policy gaps, the need for robust privacy safeguards, and the practical challenges faced by police, prosecutors and the judiciary. Recommendations centered on coordinated awareness campaigns, capacity‑building for law‑enforcement, AI‑augmented investigative tools, and a balanced legal regime that protects both victims and the innocent.

Detailed Summary

1. AI & India’s Digital Economy – The Bright Side

Prof. Triveni Singh opened by framing AI as a “vibrant and significant” contributor to India’s digital economy. He noted:

  • India’s IT sector already accounts for roughly 28 % of global IT output; domestically, AI is projected to drive ≈ 20 % of GDP growth in the near term.
  • The UPI ecosystem exemplifies how AI‑enabled services have democratized finance (even a roadside coconut vendor can receive digital payments).

He cautioned that while AI fuels transformation, it also creates “a larger wing” for cyber‑threats—the same technology that streamlines payments also powers sophisticated fraud.

2. AI‑Enabled Cybercrime – The Dark Side

2.1. Criminal Exploitation of AI

  • Synthetic media: AI‑generated videos and voice‑clones are being used for sextortion, impersonation scams, and fraudulent banking calls.
  • Scale of loss: A parliamentary statement cited ₹22,000 crore lost in 2023‑24, with > 2.6 million reported cases of AI‑assisted fraud.
  • Transnational crime networks: Criminal cells operate across the Thailand‑Myanmar‑India border, recruiting locals to act as “mules” for scams targeting India, the U.S., Canada, etc.

Key Insight (Rakesh Maheshwari): “Cyber‑criminals are always a step ahead; AI gives them a scalable weapon.”

2.2. Operational Challenges for Law Enforcement

  • Attribution difficulty: AI can mask origin IPs, making it hard to trace perpetrators across jurisdictions.
  • Resource strain: India’s police force (a figure cited as ≈ 1.04 lakh officers) faces an over‑burdened caseload, with many FIRs closed without investigation.
  • Skill gap: Traditional police training does not keep pace with rapid AI evolution; officers often lack deep technical knowledge (e.g., how UPI or GST data pipelines actually work).

Quote (Tarun Wig): “You cannot investigate a crime without understanding the underlying technology—whether it’s a payment gateway, a domain registration, or a mule‑SIM network.”

3. Policy & Governance – Government’s Role

3.1. The Regulatory Backbone

Rakesh Maheshwari outlined the IT Rules 2021 and the Digital Personal Data Protection Act, 2023 (DPDPA) as the backbone of AI governance:

  1. Deterrence: Laws create liability for non‑compliance and enable penalties.
  2. Detection & Redress: Platforms must embed mechanisms for fraud detection, user education, and rapid breach reporting.
  3. Governance: Independent bodies (e.g., Data Protection Board) will adjudicate violations.

He emphasized that policy must be outcome‑driven, not merely “check‑box” compliance.

3.2. Gaps Between Policy and Practice

  • Awareness deficit: Rural and Tier‑2/3 populations often remain unaware of AI‑driven scams.
  • Implementation lag: Enforcement agencies lack AI‑enabled tools; many policies are still being operationalized.

Recommendation (Maheshwari): Strengthen ground‑level outreach and develop AI‑aware policing units.

4. Education, Awareness & the Human Factor

Dr. Sapna Bansal stressed the disconnect between academic curricula and field realities:

  • Police equipment shortage: Lack of forensic hardware and AI‑analytics platforms.
  • Need for interdisciplinary training: Criminal investigators must grasp AI, finance, and data science simultaneously.

She highlighted two recent victim cases:

  1. Kanpur senior citizen duped by a deep‑fake video of a relative, losing ₹1 lakh.
  2. Delhi senior scammed via AI‑synthesised child voice, leading to a fraudulent transaction.

Key Insight (Bansal): “Awareness at the grassroots, combined with coordinated education for police and the public, is the most cost‑effective defence.”

5. Technical Forensics – AI as a Detective

Tarun Wig presented concrete examples of AI‑driven forensic work:

  • Massive data volumes: Arrest‑related device imaging can generate 80 TB per case (e.g., income‑tax investigations).
  • Pattern‑recognition models: A lightweight AI model applied to GST transaction data uncovered ₹5,577 crore in potentially fraudulent entries within a 3‑4 month window.
  • Network‑crime mapping: By clustering digital footprints (SIM, IP, financial flows), AI can reveal crime “networks” rather than isolated incidents.

Takeaway (Wig): “AI can sift terabytes of evidence quickly, but we still need human analysts to interpret the patterns.”
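The network‑mapping idea Wig described—linking cases that share a SIM, IP, or bank account into a single crime “network”—can be sketched as a union‑find pass over case records. This is an illustrative sketch only: the FIR numbers and identifiers below are invented, and real forensic pipelines would operate on far richer evidence graphs.

```python
from collections import defaultdict

def cluster_cases(cases):
    """Group case records into networks: two cases belong to the same
    network if they share any identifier (SIM, IP, bank account)."""
    parent = {}

    def find(x):
        # Union-find with path compression.
        while parent.setdefault(x, x) != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Link each case node to every identifier node it mentions.
    for case_id, identifiers in cases.items():
        for ident in identifiers:
            union(("case", case_id), ("id", ident))

    # Collect cases by the root of their connected component.
    networks = defaultdict(set)
    for case_id in cases:
        networks[find(("case", case_id))].add(case_id)
    return sorted(sorted(group) for group in networks.values())

# Hypothetical records: FIR number -> identifiers seen in the evidence.
cases = {
    "FIR-101": {"sim:98xxx01", "ip:10.0.0.5"},
    "FIR-102": {"ip:10.0.0.5", "acct:SBI-77"},
    "FIR-103": {"acct:SBI-77"},
    "FIR-201": {"sim:98xxx99"},
}
print(cluster_cases(cases))
# → [['FIR-101', 'FIR-102', 'FIR-103'], ['FIR-201']]
```

Here three FIRs that never share a direct link (101 and 103) are still tied into one network through intermediate identifiers—precisely the shift from investigating isolated incidents to mapping networks.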

6. Law, Privacy & Prosecution – The Judicial Perspective

Judge Vivek Sood clarified the constitutional and evidentiary dimensions:

  • Privacy as a fundamental right (Justice K.S. Puttaswamy v. Union of India, 2017): any AI‑driven surveillance must satisfy a strict proportionality test.
  • Exception for crime prevention: Privacy can be curtailed for legitimate investigation, but safeguards (audit trails, minimised data retention) are mandatory.
  • Criminal conspiracy law (Indian Penal Code, 1860): even if the actors are distributed globally, the law permits prosecution of all conspirators, provided mens rea and actus reus can be established.
  • Attribution problems: AI‑generated attacks may lack a clear perpetrator; prosecution must often rely on money‑trail analysis and digital forensics.

Recommendation (Sood): Adopt a balanced regime that protects innocent rights, guarantees due process, and provides law‑enforcement with legally sound data collection methods.

7. Deepfakes & Voice‑Cloning – Emerging National‑Security Threat

Multiple panelists converged on the deep‑fake risk:

  • Election interference: By 2029, AI‑generated political videos could manipulate voter sentiment.
  • Communal tension: Synthetic videos of animal‑sacrifice could inflame religious discord, threatening social order.
  • Detection lag: No AI model currently offers 100 % accuracy in detecting deepfakes; most solutions achieve ≈ 80 % reliability, insufficient for court admissibility.

Practical tips for citizens (Bansal & others):

  1. Cross‑verify with reputable news outlets before sharing a sensational video.
  2. Use reverse‑image/video search to trace original sources.
  3. Verify voice calls by calling back through known numbers.

Key Insight: “Technology outpaces detection; therefore, human vigilance and multi‑source verification remain crucial.”

8. Data Privacy & Protection – The DPDPA in Action

Judge Sood and Rakesh Maheshwari discussed the Digital Personal Data Protection Act:

  • Data minimisation: Collect only what is strictly required; retain for the shortest period needed.
  • Transparency & consent: Users must be informed of data usage; consent must be granular.
  • Breach notification: Organizations must report breaches to the Data Protection Board and directly to affected individuals.

They warned that mis‑use of personal data fuels AI‑enabled scams (e.g., voice‑cloning requires a corpus of the victim’s speech).

9. Recommendations & Roadmap

| Domain | Recommended Action | Lead(s) |
| --- | --- | --- |
| Policy | Finalise operational guidelines for AI‑driven fraud detection; integrate AI provisions into the IT Rules 2021. | Rakesh Maheshwari |
| Law Enforcement | Create dedicated AI‑forensics units; provide 6‑month intensive cyber‑training for investigators. | Tarun Wig, Triveni Singh |
| Judiciary | Develop technical courts or expert panels for AI‑evidence admissibility. | Vivek Sood |
| Education & Awareness | Launch nationwide AI‑security literacy campaigns targeting Tier‑2/3 cities; embed AI ethics into school curricula. | Sapna Bansal |
| Private Sector | Encourage public‑private partnerships for threat‑intelligence sharing (e.g., income‑tax + fintech). | Deepak Kumar Singh |
| Research & Development | Fund deepfake‑detection R&D to push accuracy above 95 %. | Tarun Wig |
| International Cooperation | Strengthen INTERPOL channels and mutual‑assistance treaties for cross‑border cybercrime pursuit. | Triveni Singh |
| Public‑Private Intelligence | Institutionalise private intelligence hubs (as in the US) to augment government capability. | Vivek Sood |

10. Closing Remarks

Prof. Triveni Singh summed up the sentiment: AI is a “double‑edged sword.” It can accelerate economic growth, improve fraud detection, and support public‑service delivery, but it also empowers criminal enterprises. The panel urged a co‑ordinated, multi‑stakeholder approach—combining robust policy, skilled law‑enforcement, legal safeguards, and citizen awareness—to ensure that AI serves a “secure India” rather than amplifying its vulnerabilities.

Key Takeaways

  • AI’s economic promise is huge (≈ 20 % of projected GDP growth), but the same technology fuels massive cyber‑fraud (₹22,000 crore lost in 2023‑24).
  • Cyber‑criminals are first adopters of AI, using synthetic media for sextortion, impersonation, and large‑scale scams.
  • Law‑enforcement capacity is severely stretched: police numbers, training, and forensic tools are inadequate for AI‑driven investigations.
  • Policy alone is insufficient; the IT Rules 2021, DPDPA, and forthcoming AI‑specific guidelines need concrete ground‑level enforcement and public outreach.
  • AI‑augmented forensics can process terabytes of data, as shown by GST‑fraud detection (₹5,577 crore identified), yet human expertise remains essential for interpretation.
  • Privacy is constitutionally protected; any AI surveillance must respect the proportionality test and follow strict data‑minimisation.
  • Deepfakes pose a national‑security risk; detection technology is currently limited to ≈ 80 % accuracy, making human verification vital.
  • Education and awareness—from school curricula to senior‑citizen campaigns—are the most cost‑effective defense.
  • Legal frameworks for conspiracy and attribution exist, but cross‑border cooperation and money‑trail analysis are needed to prosecute dispersed AI‑enabled crimes.
  • Recommendations: create AI‑forensics units, embed AI‑ethics in education, fund deep‑fake detection R&D, develop expert courts, and foster public‑private intelligence sharing.
  • Final mantra: Stay alert, stay aware, stay secure—the three pillars every citizen, organization, and institution must uphold in the AI era.
