Consumers at the Core: Building AI with a Consumer-Centric Approach
Abstract
The panel examined how artificial intelligence (AI) is reshaping the relationship between consumers and the marketplace, highlighting both empowering possibilities (personalised services, 24‑hour digital assistance, multilingual support) and emerging risks (opaque decision‑making, deep‑fakes, hyper‑personalised targeting, algorithmic bias and privacy erosion). Representatives from government, civil society, and industry discussed concrete Indian examples—bank‑level fraud detection, telco voice‑chatbots, grievance‑redressal platforms, and AI‑driven public‑service tools—and debated policy levers. A consensus emerged around five (later six) “pillars” for consumer‑centric AI: transparency, accountability, fairness & non‑discrimination, data privacy, accessibility & inclusion, and consumer awareness of rights. The session closed with a brief Q&A, recommendations for regulators and industry, and a call for coordinated action to embed trust at the core of AI governance.
Detailed Summary
1. Opening and Framing – Moderator
The moderator welcomed the audience, thanked the Government of India and the Ministry of Electronics & IT, and noted that this was the only session in the summit devoted entirely to consumer empowerment. A short framing slide listed four key ways AI can empower consumers and four major emerging risks:
- Empowering aspects – personalized recommendations, 24‑hour digital assistants, real‑time risk monitoring, and multilingual AI (regional‑language support).
- Risks – opaque automated decisions, deep‑fake audio/video, hyper‑personalised targeting, and AI “hallucinations” that deliver wrong outputs.
Two illustrative Indian cases were cited:
- Banking fraud‑detection AI – how AI‑driven monitoring has cut fraudulent transactions.
- Telco voice‑chatbot ecosystems – regional‑language bots that improve accessibility but also raise privacy concerns.
The moderator then invited the keynote speaker, Mr. Rohit Kumar Singh, former Secretary of the Department of Consumer Affairs (rendered in the transcript as “Mr. Oitkumar Singh”).
2. Government Perspective – Rohit Kumar Singh
Background & Credibility – Singh opened by thanking the organizers, noting his recent retirement from the civil service and his past roles as Secretary, Department of Consumer Affairs, and Member of the NCDRC.
Historical Context – He compared the current AI race (U.S., China, private LLM developers) with the Non‑Aligned Movement of Indira Gandhi’s era, urging India to adopt a “middle‑game” stance and become an aggressive champion of responsible AI.
Power‑Shift Narrative –
- Past – In‑person “kirana” stores gave consumers bargaining power.
- E‑commerce era – Platforms like Amazon and Flipkart reversed that balance, concentrating power with sellers.
- AI era – The next shift could aggravate the power imbalance unless safeguards are built.
Risks of Opaque AI –
- Dark‑patterns – UI designs that nudge consumers toward unwanted actions.
- Algorithmic discrimination – Credit‑scoring or insurance pricing that penalises people based on location, phone type, or battery level.
- Privacy erosion – Example of a Decathlon store demanding a phone number to complete a purchase; the anecdote illustrated how POS systems embed personal data by default.
Trust as Currency – Singh argued that consumer trust is the “currency of AI.” Over‑regulation at early stages could stifle innovation, but once AI becomes entrenched, regulation is essential to avoid systemic harm.
Five Pillars of Consumer‑Centric AI (as proposed by Singh):
- Transparency – Consumers must know whether they are interacting with a bot, what data is being used, and whether reasoning (vs. pattern‑matching) underlies responses.
- Accountability – Clear liability when AI‑driven outcomes cause harm (e.g., suicide prompts, weapon‑making instructions).
- Fairness & Non‑Discrimination – Mitigate bias from training data that over‑represents Western contexts.
- Privacy & Data Protection – Safeguard personal data throughout the AI lifecycle.
- Accessibility & Inclusion – Ensure AI reaches remote, linguistically diverse, and underserved populations (e.g., AI applications in education, health, agriculture).
He closed by noting that human capability cannot keep pace with exponential AI growth, stressing the need for collective decisions on whether consumers become “data points” or “empowered participants.”
3. Additional Pillar – Consumer Awareness (Moderator)
The moderator interjected to add a sixth pillar:
- Consumer Awareness & Rights – Consumers must understand what data they submit and what rights they hold, ensuring an informed consent framework.
4. DARPG Initiatives – Sarita Chauhan
Sarita Chauhan, representing the Department of Administrative Reforms and Public Grievances (DARPG), outlined the government’s digital‑public‑infrastructure ecosystem:
- Digitisation as foundation – AI models are layered on top of e‑services, enabling transparent, inclusive, and last‑mile delivery.
- Grievance‑centric approach – AI‑driven analytics help process citizen feedback, directly feeding into policy improvement.
- Award mechanisms – National e‑Governance Awards (since 2003) now include categories for AI & emerging technologies (e.g., Bhashini, Digi Yatra).
- Case studies –
- NADRES – AI/ML model by ICAR that predicts livestock disease outbreaks, protecting rural livelihoods.
- Vokasati – Assam‑based chatbot platform offering one‑stop service access and faster grievance resolution.
She emphasized “minimum government, maximum governance”, stressing that AI should enable “sabka saath, sabka vikas, sabka prayas.”
5. Global Civil‑Society View – Erin Parsons
Erin Parsons, from Consumers International, presented a comparative, rights‑based framework:
- UN Consumer Protection Guidelines – The baseline for consumer rights worldwide.
- Three Core Principles for AI –
- Transparency – Users must be aware they are interacting with AI and understand data usage.
- Data‑Privacy – Move from individualized consent to a social‑good view of data, recognizing data’s collective nature.
- Adoption & Quality – Not just uptake of AI tools, but ensuring high quality and consumer resilience.
She highlighted the heterogeneity of AI experiences across the 100+ member organisations, especially between the global north and south, and underscored the need for global trends to inform minimum‑standard frameworks that transcend jurisdictional differences.
6. Consumer‑Voice Perspective – Ashim Sanyal
Ashim Sanyal (Consumer VOICE) answered a moderator‑posed question on the consumer decision‑making lifecycle and where AI exerts influence:
- Key touchpoints – identification, consideration, purchase, post‑purchase service.
- Risks –
- Algorithmic bias (e.g., price discrimination based on device type, battery level).
- Dark‑patterns that subtly steer choices.
- Deep‑fake manipulation influencing financial decisions.
- Recommendations for Consumer Organisations – focus on transparency, accountability, bias‑audit, and enforcement under the Consumer Protection Act, especially for high‑risk AI in finance and e‑commerce.
He also cited a government‑run AI‑enabled grievance‑redressal chatbot (Grahaq NIA) that operates in 17 languages and has processed refunds worth INR 46 crore—a positive example of AI deployed for consumer benefit.
7. Industry View – Sudeep Das (DoorDash)
Sudeep Das, leading AI at DoorDash (previously Netflix), described how industry can align profit motives with consumer‑centric values:
- Consumer‑obsessed product design – Long‑term value derives from trustworthy experiences.
- Embedding safeguards into the product stack –
- Transparency and safety metrics are part of KPI dashboards.
- Bias mitigation – Partner with academic researchers to audit and de‑bias models from day one.
- Accessibility – Optimize performance for low‑end devices and poor network conditions.
- Red‑Team Exercises – Proactive adversarial testing of generative/agentic AI to prevent harmful outputs (e.g., instructions for self‑harm).
- Regulatory timing – Light regulation at early stages to foster innovation; tighter controls once technology matures to protect trust.
He concluded with a call for industry‑government‑civil‑society collaboration to set minimum standards while preserving innovation.
8. Q&A (Brief)
The moderator allotted ≈2 minutes for audience questions. Highlights:
- Sarita Chauhan was asked to elaborate on DARPG’s use of AI in grievance handling; she reaffirmed the AI‑driven voice‑recognition system (17 languages) and the “e‑Jagriti” platform for virtual hearings.
- Erin Parsons was asked about global harmonisation, reiterating the need for a baseline rights‑based framework beyond regional compliance.
- Ashim Sanyal fielded a query on sector‑specific vulnerabilities, emphasizing fintech, e‑commerce, and service sectors (e.g., ATM cash‑dispensing discrepancies).
9. Closing Remarks
The panel wrapped up with several key reflections:
- AI is not “free” – it extracts consumer data and may later monetize it at a higher cost to the consumer.
- Trust, transparency, and accountability must be baked into AI governance.
- Collective action across government, industry, and consumer organisations is essential to prevent dark‑patterns, deep‑fakes, and algorithmic discrimination.
The moderator announced the next panel (IIM Lucknow) and thanked the Ministry of Electronics & IT for facilitating the discussion.
Key Takeaways
- AI can both empower and exploit consumers; the balance hinges on how transparent, accountable, and fair the systems are.
- Five (later six) foundational pillars for consumer‑centric AI were proposed: transparency, accountability, fairness/non‑discrimination, privacy, accessibility/inclusion, and consumer awareness of rights.
- Government initiatives (DARPG, NCDRC, AI‑enabled grievance portals) illustrate how public‑sector AI can improve service delivery while exposing privacy and bias risks.
- International civil‑society (Consumers International) stresses a rights‑based framework, highlighting divergent global experiences and the need for a baseline standard.
- Industry leaders (DoorDash, Netflix) argue that consumer trust is a long‑term profit driver; they embed safeguards (bias audits, red‑team testing, accessibility metrics) directly into product development.
- Regulatory timing matters – light‑touch regulation encourages early innovation, but mature AI deployments must be governed to protect consumers from harm.
- Real‑world examples (bank fraud‑detection AI, telco regional chat‑bots, AI‑driven grievance redressal that refunded INR 46 crore) demonstrate both benefits and pitfalls.
- Algorithmic discrimination can manifest in subtle ways (e.g., price changes based on phone model or battery level) and must be actively monitored.
- Consumer awareness is essential; without informed consent, users cannot meaningfully exercise rights over their data.
- Collaboration across government, industry, and consumer organisations is the only viable path to embed trust at the core of AI governance and ensure that AI serves, rather than subjugates, the consumer.
See Also:
- shaping-secure-ethical-and-accountable-ai-systems-for-a-shared-future
- navigating-the-ai-regulatory-landscape-a-cross-compliance-framework-for-safety-and-governance
- ai-impact-forum-democratising-ai-resources
- power-protection-and-progress-legislators-and-the-ai-era
- enterprise-adoption-of-responsible-ai-challenges-frameworks-and-solutions
- ai-for-fraud-prevention-and-financial-inclusion-in-bfsi
- policymakers-dialogue-on-ai-policy-evolution-and-the-rule-of-law
- responsible-ai-at-scale-governance-integrity-and-cyber-readiness-for-a-changing-world
- accelerating-indias-ai-growth-a-blueprint-for-indias-ai-success
- ai-beyond-moonshots-a-playbook-for-many