Safe and Trusted Agentic AI: Building Accountability and Inclusion for India and the Global South

Detailed Summary

1. Opening & Context

The moderator invited the audience to be seated and explained that the panel was a follow‑up to the AI Safety Summit held in Kerala in December. That summit had produced a safety report that was well received by industry, academia and the public sector, and the panel aimed to extend those discussions specifically toward agentic AI: autonomous systems that act with limited human oversight.

A forthcoming AI‑Sovereignty report for India and the Global South was announced, based on an extensive survey and interview process. The moderator indicated that Prof. Krishnashree Achuthan would deliver a brief (≈5 min) overview of the safety report’s key findings.


2. Safety‑Report Highlights – Prof. Krishnashree Achuthan

2.1 Three Core Institutional Pillars (Amrita University)

  1. Compassion‑driven research – technology must be evaluated through the lens of how it affects human lives, especially vulnerable populations.
  2. Education for life – training that equips learners with the skills to understand and shape AI responsibly.
  3. Global impact – ensuring that research outcomes are relevant beyond the local context.

2.2 Six Technical Pillars Examined

  • Lifecycle monitoring – Continuous observation of AI systems from design to decommissioning.
  • Emergent safety / deviant behaviour – Detecting and mitigating unexpected or harmful system actions.
  • Safeguards – Built‑in controls to prevent misuse or failure.
  • Ethical autonomy – Defining permissible boundaries for autonomous decision‑making.
  • Recursive governance – Applying governance mechanisms that can adapt as the system evolves.
  • Regional adaptive governance & human oversight – Tailoring rules to local cultural, linguistic and infrastructural realities while retaining human‑in‑the‑loop control.

2.3 Sector‑Specific Maturity Gaps

  • Banking, Financial Services, Insurance (BFSI) – Highest compliance; near‑complete human oversight; benefits from RBI’s AI framework.
  • Education – “Sleeping giant”: enthusiasm for AI tutors but almost no safeguards against deviant behaviour; lack of governance mechanisms.
  • Healthcare – High variability: some institutions are well prepared, but overall risk is highest because of missing escalation protocols and automated safeguards.
  • Public Services – “Blinded sector”: solutions designed for urban contexts are transplanted to rural areas without adaptation, causing poor adoption.

Key Insight: Across all sectors, there is a pronounced maturity gap—the readiness to manage agentic AI safely varies dramatically, with education lagging far behind BFSI.


3. UNESCO Principles & Regional Priorities – Dialogue with Gabriela Ramos

Gabriela asked which UNESCO principles (justice, inclusivity, accountability, governance) are most critical for India’s agentic‑AI ecosystem, given its linguistic (22 constitutionally scheduled languages) and cultural diversity.

3.1 Response (Dr. Abdulrahman Habib)

  1. Transparency & Accountability – essential for preserving the rule of law; AI should not be deployed “for everything”.
  2. Proportionality – AI should be used only when the benefit outweighs the risk.
  3. Legal Personality – The panel debated whether AI systems should ever be granted legal personhood.
    • Japan favors legal personality for care‑economy robots.
    • European nations oppose it, emphasizing human‑rights accountability.
  4. Human Rights Safeguards – Preventing a “responsibility vacuum” where agents could act without clear liability.

3.2 Follow‑up (Gabriela) – Economic Angle

Gabriela shifted to the economic dimension: how AI‑driven trade will shape bilateral relationships. Dr. Habib stressed that sovereignty must be baked into AI deployments—countries need locally‑adapted, culturally‑sensitive AI to avoid “imported” agents that lack contextual understanding.


4. Accountability Across the AI Lifecycle – Multi‑Speaker Discussion

4.1 Who Is Responsible?

A series of questions (raised by the moderator and audience) probed the allocation of responsibility when an agentic system misbehaves.

  • Developers – design‑time responsibilities (ethical coding, bias mitigation).
  • Providers / Deployers – operational responsibilities (ensuring proper configuration, risk assessment).
  • Regulators / Governments – oversight, setting standards, enforcing compliance.

Dr. Habib emphasized a layered accountability model, arguing that no single actor can bear the entire burden, especially for generative agents.

4.2 Audit & Evaluation Frameworks (Dr. Ansgar Koene – inferred)

Although Dr. Koene’s name was not clearly heard, the discussion reflected his area of expertise (AI audits). The panel highlighted:

  • ISO 42001 (AI Management System) – life‑cycle‑oriented standard.
  • IEEE Ethics of Autonomous and Intelligent Systems standards.
  • AI Impact Assessment tools (e.g., Council of Europe’s forthcoming framework).

The consensus: India and the Global South are moving toward a “bottom‑up” approach, creating practical tools for SMEs and large enterprises alike, but gaps remain in translating high‑level principles into actionable checklists.


5. Proposed Agentic‑AI Governance Framework – Prof. Krishnashree Achuthan (follow‑up)

Building on the earlier safety‑report pillars, Prof. Achuthan outlined a three‑prong framework:

  1. Dynamic Compliance Oracle – a regulatory API that continuously synchronises evolving rules with the agent’s decision engine.
  2. Federated Watcher Agents – secondary agents that monitor primary agents for goal‑drift or policy violations, raising alarms when anomalies are detected.
  3. Compassionate & Constitutional AI – embedding secularism, equity, and justice into the core architecture, ensuring that even after technical compliance the system respects societal values.
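The interplay between the first two prongs can be sketched in code. The snippet below is a minimal, purely illustrative sketch: all names (`ComplianceOracle`, `WatcherAgent`, the action strings) are invented for this example and do not come from the panel.

```python
from dataclasses import dataclass, field

@dataclass
class ComplianceOracle:
    """Holds the current rule set; a real deployment would continuously
    synchronise this from a regulator-maintained feed."""
    allowed_actions: set = field(default_factory=set)

    def update_rules(self, actions):
        # Replace the rule set as regulations evolve.
        self.allowed_actions = set(actions)

    def is_compliant(self, action):
        return action in self.allowed_actions

class WatcherAgent:
    """Secondary agent: observes a primary agent's actions and raises
    an alarm when one falls outside the current policy."""
    def __init__(self, oracle):
        self.oracle = oracle
        self.alarms = []

    def observe(self, agent_id, action):
        if not self.oracle.is_compliant(action):
            self.alarms.append((agent_id, action))

oracle = ComplianceOracle()
oracle.update_rules(["answer_query", "summarise_document"])
watcher = WatcherAgent(oracle)

watcher.observe("tutor-agent-1", "answer_query")      # within policy
watcher.observe("tutor-agent-1", "initiate_payment")  # policy violation
print(watcher.alarms)  # [('tutor-agent-1', 'initiate_payment')]
```

The point of the sketch is the separation of concerns: the oracle owns the evolving rules, while the watcher only observes and flags, leaving escalation and remediation to human oversight, as the framework’s third prong requires.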

6. Sector‑Prioritisation & Outcome‑Based Regulation – Open Discussion

6.1 Which Sector First?

A participant (identified only as “Dr. …”) argued that policy cycles are sector‑driven (ministries push their own agendas). Rather than forcing a hierarchical order, the panel suggested learning‑across‑sectors:

  • Education – a complex testbed because it requires pedagogical redesign, teacher training, and cultural content adaptation.
  • Healthcare – high‑risk but already accustomed to rigorous regulation (e.g., clinical safety).

6.2 Hybrid Regulatory Approach

The panel converged on a hybrid model:

  • Common baseline requirements (risk definitions, emergent‑behaviour controls, transparency metrics) applied across all domains.
  • Sector‑specific add‑ons (clinical safety for health, age‑appropriate safeguards for education).
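The hybrid model amounts to a simple layering rule: every deployment inherits the common baseline, then merges in its sector’s add‑ons. A hypothetical illustration (all requirement names are invented for this sketch):

```python
# Common baseline applied to every domain.
BASELINE = {
    "risk_definitions": True,
    "emergent_behaviour_controls": True,
    "transparency_metrics": True,
}

# Sector-specific add-ons layered on top of the baseline.
SECTOR_ADDONS = {
    "health": {"clinical_safety_review": True},
    "education": {"age_appropriate_safeguards": True},
}

def requirements_for(sector):
    """Merge the common baseline with any sector-specific add-ons."""
    return {**BASELINE, **SECTOR_ADDONS.get(sector, {})}

print(sorted(requirements_for("education")))
# ['age_appropriate_safeguards', 'emergent_behaviour_controls',
#  'risk_definitions', 'transparency_metrics']
```

Sectors without add‑ons still receive the full baseline, which is the property the panel emphasised: no domain falls below the common floor.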

6.3 India’s “AI Superpower” Strategy

The moderator praised India’s digital public infrastructure (DPI), notably the Unified Payments Interface (UPI), as a strategic, sector‑focused pilot that has advanced financial inclusion. Extending this sequencing‑by‑impact approach to other sectors (health, education) was recommended.

6.4 Four “I”s for Sustainable AI (Speaker not clearly identified)

  • Incentives – stimulate private‑sector investment in safe AI.
  • Investments – large‑scale public funding for infrastructure and research.
  • Institutions – create bodies (e.g., AI‑ethics councils) that can issue guidelines and enforce them.
  • Infrastructure – ensure connectivity, data‑governance platforms, and talent pipelines.

7. Gender Inclusion & Women’s Empowerment – Panel Reflections

  • Current Representation – Women constitute roughly 20 % of the global AI workforce; in India, the share of women in engineering is comparatively higher.
  • Trends – Participation is rising, especially in the Global South (e.g., Saudi Arabia reporting >45 % women in ICAIRE teams).
  • Challenges – Need for a pipeline (education → career) and removal of systemic barriers (e.g., glass ceiling).

The panel agreed that gender‑balanced teams improve AI outcomes and that continued advocacy is essential.


8. Announcements & Closing Remarks

  1. Amrita University Bias‑Tracking System – a tool to evaluate large‑language‑model outputs for bias, truthfulness, etc., tailored to Indian use‑cases. Attendees were invited to demo the system.
  2. AI‑Sovereignty Report (India & Global South) – co‑authored with NASSCOM, scheduled for release at 3 pm on the following Friday. Panelists were asked to “raise a toast” to the report.
  3. Collaboration with Prof. Ganesh Ramakrishnan (BharatGen, IIT‑Bombay) – acknowledged for joint work on healthcare‑sector agentic AI applications, despite his physical absence from the session.

The moderator thanked the participants, highlighted the importance of knowledge exchange across jurisdictions, and closed the panel.

Key Takeaways

  • Maturity Gap Exists – BFSI shows strong compliance; education lacks safeguards; healthcare shows high variability; public‑service AI often mis‑aligned with rural contexts.
  • UNESCO Principles All Matter – Transparency, accountability, proportionality, and human‑rights safeguards are critical for India’s multilingual, culturally diverse environment.
  • Legal Personality for AI Is Contested – Japan leans toward granting it for care‑robots; Europe opposes it, citing responsibility dilution; the panel leans toward no legal personality to preserve human accountability.
  • Layered Accountability Model – Responsibility should be distributed among developers, providers/deployers, and regulators, with clear life‑cycle checkpoints.
  • Emerging Standards – ISO 42001, IEEE ethics standards, and AI impact‑assessment frameworks are gaining traction, but practical tooling for SMEs remains a gap.
  • Three‑Prong Governance Blueprint – Dynamic compliance oracle, federated watcher agents, and a compassionate/constitutional AI layer were proposed as a holistic framework.
  • Hybrid Regulatory Approach – Common baseline requirements across sectors plus sector‑specific add‑ons (e.g., clinical safety for health, age‑safety for education).
  • Outcome‑Based Pilot Strategy – India’s DPI demonstrates the value of sector‑focused pilots before scaling; similar sequencing recommended for health and education.
  • Four “I”s for AI Growth – Incentives, Investments, Institutions, and Infrastructure are essential levers for a safe, inclusive AI ecosystem in the Global South.
  • Women’s Participation Is Rising – The Global South (especially Saudi Arabia) shows higher female representation in AI than many Western contexts; continued pipeline development is needed.
  • Concrete Deliverables – Release of a bias‑tracking tool for Indian LLMs and the AI‑Sovereignty report for India & the Global South were announced, underscoring the panel’s commitment to actionable outcomes.
