Safe and Trusted AI: The Ethics and Governance Perspective

Abstract

The panel examined how safe, trustworthy AI can be realised through intertwined ethical safeguards, accountability mechanisms and governance structures. Participants reflected on practical challenges – from plagiarism‑checking in scholarly publishing and bias in training data, to national policy scaffolding, human‑rights considerations in the Global South, legal professional duties, and the reliability of AI‑driven extreme‑weather forecasts. The discussion highlighted the need for transparent, human‑in‑the‑loop oversight, sector‑specific regulatory lenses, and collaborative international frameworks to ensure inclusive, equitable AI deployment.

Detailed Summary

1. Opening Remarks & Framing

  • Prof. Sachin Kumar Sharma (DG, RIS) welcomed the audience, thanked the Director General for enabling the session, and introduced the theme: ethics and governance must be treated as a single, inseparable domain for trustworthy AI.
  • Dr. Amit Kumar (RIS) underscored the ubiquity of AI across education, business, research, and health, noting paradoxes such as students relying on AI for assignments yet scrambling to hide that usage, and faculty generating reports in minutes only to spend hours “de‑AI‑ifying” them.
  • He highlighted three core concerns: (1) privacy – AI systems can infer personal details from login data; (2) governance – the need for inclusive structures that avoid an “AI‑UPI” model limited to elite users; (3) Global South coordination – RIS’s MOU network of 157 think‑tanks in 90 countries aims to produce a collective report on inclusive, trusted AI.

2. Ethics of AI in Scientific Publishing – Dr. Geetha Vani Rayasam

  • Scenario described – authors increasingly draft manuscripts with generative AI; editors deploy AI tools for plagiarism and AI‑content detection.
  • Key ethical warnings
    • Data privacy breach – uploading confidential manuscripts to AI services can unintentionally make proprietary or patent‑sensitive information public.
    • Hallucinations & fabricated citations – AI may generate plausible‑looking but fictitious references; human verification is mandatory.
    • Accountability rests with humans – regardless of AI assistance, authors remain liable for plagiarism, copyright infringement, or data‑privacy violations.
  • Do‑and‑don’t checklist (aligned with major publishers)
    • Verify each publisher’s AI‑use policy before submission.
    • Use AI only for editing (e.g., grammar checks) but not for drafting whole sections.
    • Declare AI assistance explicitly; some journals require the exact prompt and highlighted sections.
    • Do not manipulate images or research figures with AI without disclosure.
  • Bias awareness – training data often under‑represents Indian or non‑Western populations, leading to skewed outputs.
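The fabricated‑citation warning above can be partly operationalised before submission. The sketch below is a hypothetical illustration (the `flag_references` helper and the sample references are invented for this example, not any publisher’s tool): it scans reference strings for DOI‑like identifiers so that entries lacking one can be queued for manual verification.

```python
import re

# Rough DOI pattern (Crossref-style: "10.<registrant>/<suffix>")
DOI_RE = re.compile(r"\b10\.\d{4,9}/\S+\b")

def flag_references(references):
    """Split references into (with_doi, needs_manual_check) lists."""
    with_doi, needs_check = [], []
    for ref in references:
        (with_doi if DOI_RE.search(ref) else needs_check).append(ref)
    return with_doi, needs_check

refs = [
    "Smith, J. (2021). Trust in AI. J. Ethics, 10.1000/jethics.2021.042",
    "Doe, A. (2023). A plausible-looking but unverifiable source.",
]
ok, suspect = flag_references(refs)
print(len(ok), len(suspect))  # → 1 1
```

A DOI match is of course no guarantee the citation is real; the point of the split is only to shrink the pile that a human must verify by hand.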

3. National AI Governance Framework – Dr. B. K. Murthy

  • Historical note – early involvement in India’s first National AI Mission (circa 2018).
  • Four‑step deployment framework
    • Step 1 – Define Use‑Case & Audience: identify the business goal, user group, and internal policy alignment; perform a risk assessment.
    • Step 2 – Data Management: source, cleanse, store, and audit data; ensure consent, privacy, and bias mitigation; guard against contamination by AI‑generated “synthetic” data.
    • Step 3 – Algorithm Selection & Design: choose up‑to‑date models (post‑2020) that support explainability; avoid legacy algorithms; conduct thorough testing, bias checks, and transparent logic flows.
    • Step 4 – Deployment & Ongoing Governance: production testing, continuous monitoring, an incident‑response team, change management, staff training, and clear accountability documentation.
  • Risk categorisation – high‑risk, medium‑risk, and low/no‑risk AI systems; human‑in‑the‑loop is mandatory for all, with full human control for high‑risk systems and human verification for medium‑risk systems.
  • Ethical “Sutras” – eight principles: ethical purpose, social benefit, accountability, transparency/explainability, safety/reliability, open data, fair computation, privacy, plus intellectual‑property considerations.
  • Governance challenges – AI models require continual retraining due to data drift; thus, governance must be a living process rather than a one‑off certification.
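The risk tiering and human‑in‑the‑loop rules above could be encoded directly in deployment tooling. A minimal sketch, assuming a simple three‑tier lookup (the `required_oversight` function and tier labels are illustrative, not taken from any published framework document):

```python
# Map the risk tiers from the framework to the oversight rules described:
# full control for high-risk, verification for medium-risk, and baseline
# human-in-the-loop for low/no-risk systems.
OVERSIGHT = {
    "high": "human has full control over every decision",
    "medium": "human verifies outputs before they take effect",
    "low": "human-in-the-loop retained, with routine monitoring",
}

def required_oversight(tier: str) -> str:
    """Return the mandated oversight level; unknown tiers fail safe to high."""
    return OVERSIGHT.get(tier, OVERSIGHT["high"])

print(required_oversight("medium"))
```

Defaulting unknown tiers to the high‑risk rule reflects the framework’s stance that human oversight is mandatory everywhere, so an unclassified system should never receive less scrutiny by accident.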

4. Human‑Rights & Fairness in the Global South – Dr. Titipol Phakdeewanich

  • Context – Thailand’s Personal Data Protection Act (PDPA, 2019) reflects a hierarchical societal view where authorities often assume rights over individuals’ data.
  • Key challenges
    • Interpretation gap – domestic perception of privacy differs from universal human‑rights standards, creating friction in AI regulation.
    • Cross‑border cyber‑crime – leaks of personal data (e.g., ministries, banks) erode public trust.
    • Civil‑society participation – limited channels for NGOs to scrutinise AI deployments; calls for transparent oversight mechanisms.
  • Recommendation – enforce strict state‑level interpretation of PDPA, open AI‑usage audits to civil society, and embed human‑rights impact assessments before AI rollout.

5. Legal‑Professional Ethics & Liability – Dr. Ravi Srinivas K.

  • Core argument – AI does not absolve professional responsibility; it adds a layer of duty.
  • Professional duty – lawyers must re‑verify AI‑generated arguments, citations, and drafts; AI‑induced “hallucinations” cannot be passed to the client or court.
    • Guidelines emerging globally – examples from the American Bar Association (Rule 5.112) emphasise human oversight and billing adjustments for AI‑assisted work.
  • Vicarious liability – law firms may be held liable for AI tools used by their attorneys; developers are generally not directly liable unless they fail to address known defects (as per EU AI Act precedents).

6. AI for Extreme‑Weather Modelling – Mr. Saurabh Kapil

  • Business focus – BioSky provides AI‑driven forecasts for power‑grid and critical‑infrastructure resilience.
  • Statistical vs. physics‑constrained models – pure statistical models struggle with “black‑swan” events; integrating physical constraints dramatically improves accuracy for rare events.
  • Failure modes
    • Data hallucination – synthetic training data can mislead predictions.
    • Insufficient local data – satellite & sensor coverage is sparse in tropical/global‑south regions, causing model bias.
    • Compute & latency – heavy models need real‑time processing to be usable by meteorologists, emergency managers, and farmers.
  • Communication to users – provide probabilistic forecasts (e.g., “80 % chance of ≥ 50 mm rain”) rather than deterministic values; clearly flag uncertainty and model limitations.
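The probabilistic‑communication recommendation can be illustrated with an ensemble forecast: instead of quoting a single deterministic value, the share of ensemble members exceeding a threshold becomes the stated probability. The numbers and the `exceedance_probability` helper below are made up for illustration; this is not a description of BioSky’s actual method.

```python
def exceedance_probability(members, threshold_mm):
    """Fraction of ensemble members forecasting at least `threshold_mm` of rain."""
    return sum(m >= threshold_mm for m in members) / len(members)

# Hypothetical 10-member rainfall ensemble (mm)
ensemble = [62, 55, 48, 71, 53, 50, 44, 66, 58, 49]
p = exceedance_probability(ensemble, 50)
print(f"{p:.0%} chance of >= 50 mm rain")  # → 70% chance of >= 50 mm rain
```

Framing the output this way surfaces the model’s uncertainty to meteorologists, emergency managers, and farmers instead of hiding it behind a single number.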

7. Fiduciary Duty & Liability Hierarchy

  • Fiduciary relationship – the lawyer‑client relationship is a contract of service, and hence not covered by the Consumer Protection Act.
  • Liability hierarchy
    • Lawyer – primary duty to ensure advice is accurate; must perform human oversight of any AI‑generated content.
    • Law firm – may bear vicarious liability for AI tools deployed by its attorneys.
    • AI developer – generally not liable unless a defect is known and unpatched; EU AI Act stipulates developer responsibility for post‑launch remediation.
  • International reference – American Bar Association’s 2023 guidance (Rule 5.112) stresses billing for human oversight and outlines best‑practice checklists.

8. Legislative Gaps & Sector‑Specific Regulation – Dr. Nupur Chowdhury

  • Current regime – India’s Digital Personal Data Protection Act (2023) focuses on scale of operation rather than sector‑specific risk.
  • Proposed amendments
    • Sector‑based high‑risk classification (health, finance, employment, etc.) for stricter AI oversight.
    • Explicit duty of care in AI legislation, mirroring the “trust” language of the EU AI Act.
    • Mandatory public disclosure of model architecture, training data provenance, and bias‑mitigation measures to enable auditability.
  • Governance analogy – treat AI deployers like financial‑market participants regulated by SEBI, where confidentiality is balanced with regulatory transparency.

9. Closing & Q&A

  • Moderator thanked the panel, invited a group photograph, and presented tokens of appreciation.
  • Audience interaction – brief follow‑up questions on practical steps for academia (e.g., plagiarism checks) and clarification on liability nuances; panelists reaffirmed the necessity of human oversight and sector‑tailored policy.

Key Takeaways

  • Ethics + Governance are inseparable for trustworthy AI; frameworks must address privacy, bias, and accountability together.
  • Academic publishing: AI may be used for editing but must never draft whole manuscripts; authors remain fully liable for plagiarism, data breaches, and hallucinated citations.
  • Governance model: A four‑step process (use‑case definition → data management → algorithm design → deployment & monitoring) with risk categorisation (high/medium/low) is essential.
  • Human‑rights lens: Strict state interpretation of data‑protection laws and transparent civil‑society oversight are critical in the Global South.
  • Legal profession: AI augments, not replaces, professional duty; lawyers must verify AI outputs, and firms may bear vicarious liability.
  • AI for extreme weather: Physics‑constrained models and hyper‑local data are required to handle black‑swan events; uncertainty must be communicated probabilistically.
  • Liability hierarchy: Primary responsibility lies with the human professional; developers are liable only for unaddressed defects after deployment.
  • Regulatory gaps: Current Indian law lacks sector‑specific AI risk definitions and an explicit duty of care; amendments should introduce mandatory disclosures and sector‑based risk tiers.
  • International cooperation: RIS’s network of 157 think‑tanks across 90 countries exemplifies how Global South collaboration can shape inclusive AI standards.
  • Human‑in‑the‑loop remains the cornerstone across all domains, whether in publishing, governance, legal practice, or extreme‑weather forecasting.
