Resilience and Customer Protection in the Age of AI
Abstract
The panel explored how artificial intelligence is simultaneously expanding financial inclusion in India and amplifying the sophistication, scale, and speed of digital fraud. The discussion moved from a macro‑level view of the fraud problem (₹30,000 crore ≈ $3.6 bn a year; six fraud incidents per minute) to concrete interventions: responsible‑AI principles, behavioural‑science insights, and technology‑driven safeguards deployed by telecom and payments operators. The speakers debated the limits of current regulatory and consumer‑education approaches and highlighted the need for "behavioural public goods" – scalable, API‑based tools that can be shared across the ecosystem. Audience questions probed the business case for protective features, the role of government, and the challenges of data sovereignty.
Detailed Summary
1. Opening Framing
Moderator (Kunal Walia) opened with stark statistics:
- India processes trillions of rupees of daily UPI transactions, yet incurs an annual fraud cost of $3.6 bn (≈ ₹30,000 crore).
- Fraud occurs at ≈ 6 incidents per minute, affecting senior citizens, low‑income households, and the broader digital economy.
He framed three discussion pillars:
- AI’s dual impact – boosting inclusion while enlarging fraud vectors.
- Shortfalls of existing interventions (policy, technology, behaviour change).
- Why a behavioural lens is essential for building resilience.
A quick round of introductions followed, after which the panel moved to substantive dialogue.
2. AI as a Driver of Financial Inclusion
2.1 Shamina Singh – The Inclusive‑by‑Design Vision
- Mastercard’s Inclusive Growth agenda (a 13‑year effort) aims to bring one billion people into the formal economy by 2025.
- AI has turned an “analog” economy into a “data‑driven” and now an “AI‑driven” economy, enabling “hockey‑stick” growth in account ownership (from ~40 % to ~80 % of adults globally).
- She emphasized “defence‑by‑design”: products must embed privacy, transparency, and protection from the outset.
2.2 Responsible‑AI Pillars
- Privacy – safeguarding personal data throughout the transaction lifecycle.
- Transparency – clear disclosure of AI‑driven decision logic to users.
- Protection – proactive mechanisms to stop fraud before it harms consumers.
Shamina noted that while regulations exist on paper in many jurisdictions, enforcement gaps leave consumers vulnerable when they encounter their first scams.
3. AI‑Enabled Fraud – The Escalating Threat Landscape
3.1 Shweta Singh (Airtel) – Quantifying the Surge
- Global fraud losses run to ≈ $48 bn; India's annual loss is ≈ ₹30,000 crore (≈ $3.6 bn).
- 75 % of Indians encountered some form of spam/fraud in the past year; the average individual faced ≈ 119 scam attempts across channels (SMS, calls, WhatsApp, e‑commerce, etc.).
- Gen‑AI reduces the marginal cost of deception to near‑zero, enabling rapid creation of deep‑fakes, synthetic voices, and hyper‑personalised phishing attacks.
3.2 Behavioural Exploitation
- Fraudsters now collect contextual data instantly, tailoring language, financial background, and persuasion tactics at scale.
- The attack surface stretches across multiple media (voice, SMS, messenger apps, web), making siloed defences ineffective.
4. Behavioural Science Insight – Why Human Factors Matter
4.1 Nirat Bhatnagar – The Behaviour‑Technology Nexus
- Key insight: AI has **lowered the marginal cost of fraud** (e.g., automated voice‑cloning, mass‑generated phishing URLs).
- Fraudsters now traverse silos (WhatsApp → call → email → banking transaction), demanding a cross‑channel, longitudinal defence.
- Resilience = Exposure – Vulnerability. Two individuals with identical exposure can differ dramatically in vulnerability due to behavioural factors.
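The exposure/vulnerability distinction can be made concrete with a toy scoring sketch. All field names, weights, and thresholds below are hypothetical illustrations, not anything the panel specified; the only figure taken from the session is the ≈ 119 average scam attempts per person per year.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    """Hypothetical per-user signals; field names are illustrative only."""
    scam_contacts_per_year: int    # exposure: how often scams reach the user
    clicks_unknown_links: bool     # vulnerability: risky habit
    shares_otp_on_calls: bool      # vulnerability: risky habit
    completed_inoculation: bool    # vulnerability reducer (pre-bunking training)

def vulnerability_score(u: UserProfile) -> float:
    """Crude 0-1 vulnerability estimate from behavioural factors."""
    score = 0.0
    if u.clicks_unknown_links:
        score += 0.4
    if u.shares_otp_on_calls:
        score += 0.4
    if u.completed_inoculation:
        score -= 0.3
    return max(0.0, min(1.0, score))

def resilience(u: UserProfile) -> float:
    """Panel framing: resilience falls as exposure and vulnerability rise."""
    exposure = min(1.0, u.scam_contacts_per_year / 119)  # 119 = avg attempts cited
    return (1 - exposure) * (1 - vulnerability_score(u))

# Two users with identical exposure but very different vulnerability:
a = UserProfile(60, clicks_unknown_links=False, shares_otp_on_calls=False,
                completed_inoculation=True)
b = UserProfile(60, clicks_unknown_links=True, shares_otp_on_calls=True,
                completed_inoculation=False)
assert resilience(a) > resilience(b)
```

The point of the sketch is the final assertion: with exposure held constant, behavioural factors alone separate the two users, which is exactly where nudges and inoculation can act.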
4.2 Proposed Balance
- A “yin‑yang” equilibrium is required: technology can flag suspicious patterns, but behavioural nudges are needed to build consumer immunity.
- Behavioural interventions must be scalable; otherwise, they remain a boutique solution for a limited segment.
5. Inclusion, Data, and the Need for Customer Protection
5.1 Pawan Bakhshi (Gates Foundation) – From Account Opening to Active Use
- India’s ID, payment, and data‑exchange infrastructure (Aadhaar, UPI, DigiLocker, Account Aggregator) provides rich data for AI, but only ~50 % of adults use bank accounts regularly.
- The foundation now integrates “customer protection” (privacy, consent, data security) into its inclusion strategy, recognizing that trust is the prerequisite for usage.
5.2 Dual‑Level Approach
- System‑level – strengthen regulation, policy, and technology standards.
- Consumer‑level – promote behavioural change, though Bakhshi warned that mass training alone is insufficient given the diversity of user contexts.
6. Airtel’s Technology‑Driven Consumer Safeguards
6.1 Product Innovations (Shweta Singh)
| Feature | Description | Impact (as of Jan 2026) |
|---|---|---|
| Real‑time spam detection (voice & SMS) | Calls/SMS flagged in‑call with “Spam” warning; user can choose to answer or not. | 71 bn calls & 3 bn SMS flagged; fraud volume reduced ≈ 70 % (validated by Ministry of Home Affairs, I4C). |
| Malicious‑URL blocking | Detects fraudulent links across any Airtel‑served channel (SMS, WhatsApp, Instagram, etc.) before they open. | Prevents inadvertent clicks that lead to scams. |
| OTP‑on‑fraud protection | If an OTP arrives while the user is on a suspicious call, Airtel pauses delivery and asks the user to confirm the sender. | Introduces a “timely pause” that stops duress‑based OTP leakage. |
| Business‑Name display | Shows the registered company name for promotional calls, aiding user decision‑making. | Improves call‑acceptance discrimination. |
- Design philosophy: Identify behavioural traits of fraudsters (e.g., >300 unique daily calls, recent SIM swaps) rather than trying to tag specific individuals.
- Ecosystem view: Fraud spans multiple platforms; a single‑channel fix is insufficient.
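The trait-based design philosophy above can be sketched as a simple rule engine. The >300 unique daily calls threshold is the one cited on the panel; the seven-day SIM-swap window, the combined rule, and the schema are hypothetical stand-ins, not Airtel's actual system.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class LineActivity:
    """Illustrative per-SIM daily signals; not an operator's real schema."""
    unique_callees_today: int
    sim_activated: date
    flagged_url_sends: int

def is_suspected_fraud_line(a: LineActivity, today: date) -> bool:
    """Flag behavioural traits of fraudsters rather than tagging individuals:
    very high unique daily call counts, or a recently swapped SIM that is
    already dialling unusually widely."""
    recent_sim = (today - a.sim_activated) < timedelta(days=7)  # hypothetical window
    return (
        a.unique_callees_today > 300                      # threshold cited on panel
        or (recent_sim and a.unique_callees_today > 50)   # hypothetical combined rule
        or a.flagged_url_sends > 0                        # malicious-URL signal
    )

today = date(2026, 1, 15)
burner = LineActivity(unique_callees_today=80, sim_activated=date(2026, 1, 12),
                      flagged_url_sends=0)
normal = LineActivity(unique_callees_today=12, sim_activated=date(2024, 3, 1),
                      flagged_url_sends=0)
assert is_suspected_fraud_line(burner, today)
assert not is_suspected_fraud_line(normal, today)
```

Because the rules key on behaviour rather than identity, a fresh SIM with fraud-like calling patterns is caught even though the individual behind it has never been seen before.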
6.2 Business Rationale
- Features are free for customers, yet they generate brand‑trust dividends (Airtel rebranded as “the safe network”).
- While direct ARPU uplift is hard to isolate, customer loyalty and churn mitigation are observed qualitatively.
- Airtel collaborates with regulators (RBI, TRAI, MHA‑I4C) and industry bodies to push ecosystem‑wide standards.
7. Mastercard’s Redress‑Centric Approach
7.1 Shamina Singh – Embedding Protection in the Product
- Mastercard reported blocking 160 bn fraudulent transaction attempts in 2024, a figure it expects to quadruple by 2026.
- Built‑in redress: If a cardholder is defrauded, the payment product automatically reimburses the victim. This shifts liability from the consumer to the issuer.
- Emphasised that consumer education alone is insufficient; the system must be “invisible”—working seamlessly in the background so the user never notices a friction point.
7.2 Market‑Driven Incentive
- Protecting customers drives satisfaction, which directly ties to Mastercard’s business case.
- AI should be leveraged to detect and stop fraud pre‑emptively, not merely to alert after the fact.
8. Defining Resilience & Scaling Behavioural Solutions
8.1 Nirat Bhatnagar – From Micro‑Evidence to Macro‑Deployment
- Resilience = Exposure – Vulnerability; reducing vulnerability requires behavioural nudges across the transaction lifecycle.
- Authorised push payment (APP) fraud is rising in mature markets (UK, Singapore): users willingly authorise transfers that later turn out to be malicious, highlighting a behavioural gap that technology alone cannot close.
8.2 Two‑Step Scaling Model
- Micro‑scale evidence: Identify high‑impact interventions (pre‑bunking, inoculation, bridging intention‑action gaps). These show large effect sizes, but the effects decay after roughly six months.
- Macro‑scale deployment: Develop behavioural public‑goods (APIs, open‑source stacks) that smaller fintechs, banks, and telcos can plug into without building costly in‑house teams.
- Data ownership: Collective, non‑privatised data on fraud patterns should be shared across the ecosystem to keep pace with evolving AI‑driven scams.
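One way to picture a "behavioural public good" is a shared, open nudge service that any fintech calls before executing a risky transfer. The schema, rules, and message copy below are entirely hypothetical, intended only to show how a small player could consume such an API without an in-house behavioural team.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class NudgeRequest:
    """What a small fintech might send to a shared nudge service (hypothetical schema)."""
    amount_inr: float
    payee_is_new: bool
    initiated_during_call: bool   # a classic APP-fraud signal

@dataclass
class NudgeResponse:
    pause_seconds: int
    message: str

def recommend_nudge(req: NudgeRequest) -> NudgeResponse:
    """Server-side logic a public-good stack could ship as open source.
    Rules and copy are illustrative, not validated interventions."""
    if req.initiated_during_call and req.payee_is_new:
        return NudgeResponse(30, "You are on a call and paying someone new. "
                                 "Scammers often create urgency; take 30 seconds to verify.")
    if req.payee_is_new and req.amount_inr > 50_000:
        return NudgeResponse(10, "First payment to this payee. "
                                 "Confirm their details independently.")
    return NudgeResponse(0, "")

# A consuming fintech serialises the request and acts on the response:
resp = recommend_nudge(NudgeRequest(75_000, payee_is_new=True, initiated_during_call=True))
print(json.dumps(asdict(resp)))
```

Shipping the rule set as an API (or open-source stack) is what makes the intervention a public good: effect-size evidence gathered at micro scale gets encoded once and reused across the ecosystem.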
9. Audience Q&A
| Question | Respondent(s) | Core Points |
|---|---|---|
| Adoption Curve vs. Fraudster Speed (entrepreneur) – Why did Airtel launch the protective features? | Shweta Singh | Needed a first‑line‑of‑defence; internal data‑science team built intel engine; rapid roll‑out in Sep 2024; overwhelming positive user feedback despite no direct revenue gain. |
| Business Impact – Any ARPU or churn improvement? | Shweta Singh | Hard to isolate ARPU uplift, but customer trust increased; brand repositioned as “safe network”; indirect churn reduction inferred. |
| Government Role & Collaboration (audience) – How should policy support behavioural public goods? | Pawan Bakhshi, Shamina Singh, Nirat Bhatnagar | - Government can codify behavioural APIs (akin to India‑Stack). - Must ensure data sovereignty while allowing secure cross‑border cooperation. - Public‑good funding and regulatory sandboxes enable rapid scaling of behavioural tools. |
| Data Sovereignty – Risks of foreign‑hosted AI infrastructure? | Shweta Singh | Critical data should stay on Indian‑owned CSPs; government urged to mandate local hosting for state‑level datasets. |
| Decision‑Making for Courageous Interventions – How did Airtel commit? | Shweta Singh | Executives weighed customer welfare vs. short‑term cost; chose “no‑cost” features, leveraged existing network analytics, secured regulator support, and communicated the safety narrative to the market. |
10. Closing Reflections
- Scale of the problem is massive; multi‑stakeholder coordination (regulators, telcos, fintechs, payments firms, behaviour scientists) is essential.
- Technology alone cannot close the gap; behavioural public goods are needed to make protection affordable at scale.
- The panel called for more frequent, cross‑sector dialogues to share lessons, harmonise standards, and keep the onus on providers rather than on vulnerable consumers.
Key Takeaways
- AI fuels inclusion and fraud simultaneously – the same data pipelines that enable mass‑account opening also lower the cost of sophisticated scams.
- Financial‑inclusion gains are fragile unless built with privacy, transparency, and protection from the start.
- Gen‑AI (deep‑fakes, synthetic voices, automated phishing) has made fraud scalable, personalized, and rapid (≈ 6 attacks/minute in India).
- Behavioural vulnerability, not just exposure, determines whether a user falls victim; resilience can be raised by targeted nudges and inoculation.
- Airtel’s real‑time, multi‑channel safeguards (spam detection, malicious‑URL block, OTP‑pause, business‑name display) cut fraud by ~70 % and boosted brand trust.
- Mastercard’s product‑level redress (automatic reimbursement) shifts liability to the issuer, illustrating a business‑case for consumer protection.
- Regulatory frameworks exist but are poorly enforced; effective protection requires implementation and continuous monitoring.
- Behavioural public‑goods (API/stack models) can democratise access to high‑impact nudges, allowing small fintechs and NGOs to embed protection without large teams.
- Government can catalyse scaling by treating behavioural safeguards as part of the national digital‑infrastructure (India‑Stack‑style) and by mandating data localisation for critical datasets.
- Cross‑industry collaboration—sharing fraud‑pattern data, coordinated standards, and joint consumer‑education campaigns—is the only realistic path to a resilient, inclusive AI‑driven financial ecosystem.
See Also:
- responsible-ai-at-scale-governance-integrity-and-cyber-readiness-for-a-changing-world
- democratizing-ai-resources-in-india
- ai-for-economic-growth-and-social-good-ai-for-all-driving-economic-advancement-and-societal-well-being
- ai-for-everyone-empowering-people-businesses-and-society
- democratizing-ai-resources-and-building-inclusive-ai-solutions-for-india