Power, Protection, and Progress: Legislators and the AI Era

Abstract

The panel explored how artificial intelligence is reshaping law‑making, governance and everyday life for Indian citizens. Topics ranged from the need for a clear “fair‑use” regime in copyright law, to the risk of deepfakes eroding public trust, to the digital‑access divide that could create an “intelligence divide”. Speakers debated the appropriate balance between regulation and education, the role of indigenous large‑language models (LLMs) in preserving India’s multilingual heritage, and how public‑private partnerships can provide affordable AI services to the masses. Throughout, the discussion emphasized that legislators must proactively design safeguards while ensuring that AI benefits are inclusive and that India’s cultural knowledge is not lost in the transition to a data‑driven future.

Detailed Summary

1. Opening Remarks

Lord Uday Nagaraju opened the session by thanking the organizers and noting the rarity of a parliamentary panel that directly tackles AI. He highlighted a prevailing perception that “politicians don’t know anything about AI” and stressed that the panel itself is evidence of a changing mindset.

“It’s admirable that I can be on a panel with parliamentarians and have such a profound conversation on AI… we are changing the narrative that legislators are technophobic.”

He linked this shift to the broader need for digital readiness among lawmakers, pointing out that without it, policy risks will outpace understanding.

2. Copyright Reform and Fair Use

2.1 The Missing Fair‑Use Clause

The first substantive contribution came from Lord Uday Nagaraju, who traced the evolution of “fair use” from the 1950s (journals, newspapers) to the present AI‑driven content ecosystem. He argued that India currently lacks an open‑ended statutory fair‑use provision—its Copyright Act offers only enumerated “fair dealing” exceptions—which hampers both creators and innovators:

  • “Innovators, coders, digital content creators are suffering because of a lack of clarity when it comes to fair use.”
  • “If we do not enable fair use, we curtail innovation.”

He referenced Israeli copyright reforms as a potential model, suggesting that India may soon need an amendment to its Copyright Act to codify fair use.

2.2 Balancing IP Rights and Public Benefit

Lord Nagaraju warned that over‑protection of intellectual property could stifle the very ecosystem that AI thrives on. He called for a balanced approach that respects original creators while allowing transformative uses that drive societal progress.

“We must strike a balance—respect the original creator, yet permit fair use so innovation can flourish.”

3. Digital Divide, Access to AI, and Inclusion

3.1 Socio‑Economic Barriers

Shri Lavu Sri Krishna Devarayalu shifted the conversation to accessibility. He recounted experiences in Israel where certain segments of the population (religious groups, the elderly, low‑income households) lack smartphones or internet, creating hurdles in emergency communications. He drew a parallel to India, emphasizing that digital exclusion is a democratic risk:

  • “When a segment of the population cannot receive critical alerts, the government cannot fulfill its duty to protect citizens.”

3.2 Private‑Sector Partnerships

Shri Ujjwal Kumar presented concrete examples of private‑enterprise initiatives that are already widening AI access in India:

  • A major cellular network offers free access to a generative LLM for 50‑60 % of its subscriber base through a partnership with a Silicon‑Valley AI firm.
  • He suggested that government intervention could extend similar free‑access schemes to the remaining population, creating a level playing field.

Kumar also referenced Facebook’s “Free Basics” initiative (circa 2015‑2016) as a demonstration of how large‑scale reductions in the cost of data access can be driven by policy and market forces.

3.3 Indigenous LLMs & Language Preservation

The panel repeatedly returned to the idea of indigenous LLM development.

  • Shri Ujjwal Kumar argued that simply providing foreign‑origin LLMs does not solve the problem of language bias. He advocated for building India‑specific LLMs that can handle the vast multilingual landscape (the 22 constitutionally recognized languages of the Eighth Schedule, plus numerous dialects).
  • Shri Raghav Chadha added that infrastructure constraints (India currently has roughly 34,000 GPUs for AI research) must be addressed to support domestic model training.

Shri Lavu Sri Krishna Devarayalu highlighted a concrete risk: “Every 14 days one language ‘vanishes’, taking with it the collective knowledge of that community.” He called for national infrastructure to digitise and safeguard oral traditions, folklore, and regional expertise before they disappear.

4. Deepfakes, Misinformation, and Democratic Trust

4.1 Real‑World Illustration

Lord Uday Nagaraju narrated a personal anecdote: while on a morning walk, an elderly woman dismissed everything she saw on her phone as “AI‑generated fake” and expressed profound distrust. He used this to illustrate a broader trust crisis that deepfakes could exacerbate:

  • “If citizens cannot trust what they see, democracy erodes.”

4.2 The “Intelligence Divide”

He warned against an emerging “intelligence divide” where only a small elite (≈ 10 %) can afford advanced AI tools, potentially giving them disproportionate influence over public discourse.

  • “The rest of the population—90 %—may be left out, leading to a concentration of power and manipulation.”

4.3 Panel Consensus on Mitigation

  • Shri Raghav Chadha emphasized the need for clear accountability, safety standards, and a regulatory framework that anticipates deepfake threats.
  • Sarith Felber stressed that authentication mechanisms (digital signatures, provenance metadata) are essential to restore confidence.

5. Regulation vs. Education

5.1 Limits of Regulation

Sarith Felber argued that regulation alone cannot solve the AI challenge:

  • “Regulation cannot replace education for critical thinking and digital literacy.”
  • She compared AI governance to freedom of speech under Article 19 of the Indian Constitution—certain safeguards are needed, but over‑regulation can choke innovation.

5.2 European AI Act as a Reference

She contrasted India’s situation with the European Union’s AI Act (adopted in 2024), noting that the EU chose a horizontal, risk‑based legal framework that governs AI systems as a category rather than regulating individual applications piecemeal. She argued that India should wait until the technology matures before imposing blanket rules.

  • “We need to regulate what we want to regulate—transparency, safety in high‑risk domains—when the technology is ready.”

5.3 Timing and Scope

Both Sarith Felber and Shri Raghav Chadha agreed that timing is critical:

  • Early regulation can stifle promising research, while delayed action can leave the public exposed to harms.
  • A “sweet spot” exists where the technology is mature enough for oversight but not yet entrenched.

6. Knowledge Displacement & AI‑Powered Cultural Preservation

6.1 Oral Traditions at Risk

Shri Ujjwal Kumar highlighted that AI models trained solely on digitised text will ignore vast swaths of oral knowledge, leading to knowledge displacement. He stressed the urgency of programmes like India’s “Bhashini” AI Mission, which aims to digitise regional languages and folk knowledge.

6.2 Role of Government & Academia

He urged government‑backed infrastructure and academic collaborations (e.g., partnerships between universities and AI firms) to develop multilingual LLMs that incorporate indigenous datasets.

6.3 Comparative Insight from Israel

When asked about language preservation, Shri Lavu Sri Krishna Devarayalu shared Israel’s approach: supporting Hebrew‑focused LLM projects so that the language’s corpus becomes part of global AI models. He suggested that India could adopt a similar strategy, funding startups that train LLMs on Indian languages.

7. Concluding Remarks & Future Directions

The session closed with consensus on three priorities:

  • Inclusivity – AI must be accessible to every citizen, regardless of geography or socio‑economic status.
  • Balanced Governance – Regulation, education, and industry collaboration must work in tandem.
  • Cultural Safeguarding – Proactive digitisation of linguistic and oral heritage is essential to avoid knowledge loss.

Lord Uday Nagaraju thanked the audience and invited the Future Shift Labs Director to continue facilitating dialogue between legislators and the AI community.

Key Takeaways

  • Fair‑use legislation is urgently needed in India to protect creators while allowing AI‑driven innovation; Israeli reforms are a useful reference.
  • Digital exclusion creates an “intelligence divide” that can undermine democratic equality; public‑private partnerships (e.g., free LLM access via telecom providers) are a practical mitigation.
  • Deepfakes threaten public trust; authentication tools and clear legal accountability are required to preserve democratic discourse.
  • Regulation must be targeted and timed – blanket bans risk stifling growth, while delayed oversight leaves citizens vulnerable. Education in digital literacy is equally critical.
  • Indigenous LLM development is essential for multilingual inclusion and to prevent language‑bias in AI outputs; India should invest in domestic model training and data infrastructure.
  • Preserving oral and regional knowledge through digitisation (e.g., Bhashini) is vital to avoid cultural erosion as AI becomes dominant.
  • Collaboration between government, academia, and industry is the most effective route to build the necessary AI ecosystem that serves all citizens.
