AI for Disability Inclusion: Global Lessons on Accessibility by Design
Abstract
The session opened with a framing address that highlighted the scale of disability in India (≈200 million people by WHO estimates) and warned of algorithmic exclusion risks (e.g., dysarthric speech not being recognised, bias in hiring AI). The speaker stressed the need for mandatory, enforceable accessibility standards and for AI solutions that are affordable, scalable, and co‑designed with disabled users.
Prof. Alison Noble then presented the Royal Society’s 2023 report on inclusive digital assistive technology, outlining the evidence‑gathering methodology (≈800 survey respondents, 100 experts, case studies from the UK, US, India, Kenya) and the three‑pillar framework Include – Access – Improve. Key recommendations centred on functional‑disability data, recognising smartphones as assistive tools, and embedding co‑design from day 1.
The moderated panel featured four speakers—Andrew Fleming, Agastya Mehta, Amar Jain, and Tukaram Mundhe—who answered preset questions on early‑education AI, privacy vs. accessibility, AI bias, real‑world empowerment of visually‑disabled lawyers, and state‑level governance (Maharashtra’s “Divyanga Sahag” portal). The discussion also fielded audience questions on rural employment and multilingual‑multimodal AI. The session closed with announcements about the newly launched Accessibility Collagen (TAC) alliance and a call for disability‑certified procurement.
Detailed Summary
1. Opening Address – Nipun Malhotra (Moderator)
| Speaker | Key Points |
|---|---|
| Nipun Malhotra (moderator) | - Welcomed participants and underscored the partnership between the Royal Society, the Indian Department of PwDs, and The Quantum Hub. - Cited WHO prevalence (~16 % → ~200 million Indians) versus the 2011 Indian census figure (26.8 million) to illustrate systematic under‑counting and invisibility. - Highlighted global assistive‑technology need: 3.5 billion people projected to require assistive tech by 2050. |
| Nipun Malhotra (continuing) | - Stated the central question: Can AI be designed, deployed, and scaled to reach the most‑needed users, from metros to remote villages (Latur, Siliguri, Sambalpur)? - Asserted India’s dual role as consumer and architect of AI (India AI Mission). |
| Nipun Malhotra (risk warning) | - Introduced “algorithmic exclusion”: • Voice‑recognition systems failing on dysarthric speech. • Hiring algorithms misclassifying neuro‑divergent candidates as unfocused due to atypical eye‑movement patterns. - Emphasised that such failures automate discrimination. |
| Nipun Malhotra (policy side) | - Announced forthcoming mandatory, non‑negotiable ICT accessibility standards (website, portal, app). - Cited the “Rajive Raturi” case as a precedent for enforceable guidelines. |
| Nipun Malhotra (call to innovators) | - Urged AI entrepreneurs to create economical, resilient, and widely‑distributed solutions. - Listed existing collaborations: • Strategic partnership with the AssisTech Foundation (Bangalore). • NIEPID’s AI‑enhanced adaptive IQ screening tool. • Start‑ups such as Thinkerbell Labs (maker of Annie) and Trestle Labs (maker of Kibo). |
| Nipun Malhotra (real‑world examples) | - Described a visually‑impaired villager in Kerala accessing government services via voice in the native language. - Described a hearing‑impaired farmer in Telangana using real‑time Telugu captions on a ₹3,000‑5,000 smartphone. - Compared UK disability‑related spend (£1,000 per month) with Indian affordability constraints, urging cost‑effective design. |
| Nipun Malhotra (closing) | - Thanked the audience and set the stage for the panel discussion. |
Announcement: The session would be followed by a short group photo and a technical presentation by Prof. Alison Noble.
2. [Transition] – Photo Arrangement
A brief, informal exchange among the organizers (Royal Society, TQH, panelists) to arrange a group photograph. No substantive content beyond logistics.
3. Technical Presentation – “Inclusive Development & Adoption of Digital Assistive Technologies”
| Speaker | Content Summary |
|---|---|
| Prof. Alison Noble (Royal Society) | - Introduced the Royal Society (UK’s national academy of science) and referenced Stephen Hawking as a historic disabled scientist. - Presented the Executive Summary of the Society’s 2023 report (QR code displayed). - Scope of the Report: Inclusive development of digital assistive technologies across work, play, rest, and care. - Global Context: ~1.3 billion people worldwide experience disability; AI is a “real opportunity” but faces barriers: high cost, limited availability, low awareness, lack of trained personnel, inadequate financing. |
| Methodology | - International expert committee (≈100 experts). - Survey of ~800 respondents plus focus groups. - Public survey of 100+ British citizens. - Commissioned studies on disability data and “small data” for AI. - Comparative case studies (UK, US, India, Kenya). |
| High‑Level Findings (Include‑Access‑Improve) | 1. Include – Technologies must be co‑designed with end‑users. 2. Access – Affordable digital assistive tech and robust digital infrastructure are essential. 3. Improve – Disability data must be high‑quality, representative, moving from narrow identity‑based definitions to functional‑challenge descriptors. |
| Key Recommendations | - Transform disability‑data collection: shift to functional definitions that capture lived challenges; better data → better AI & policy. - Recognise smartphones as assistive technology (currently excluded from formal assistive‑tech lists). - Embed co‑design from day one: disabled persons involved at every development stage. |
| Illustrative Quote | > “Putting a blindfold on does not give you intuition about what works for a blind person.” – Dr. Vint Cerf (engineer on the report). |
| Video Montage | - Short compiled clips of disabled users sharing personal experiences: • Sophie Bay using eye‑tracking to type. • Sahira Khan (freelance artist) describing barriers when internet or device repair is unavailable. - Emphasised the “nothing about us without us” principle: technology must be designed with disabled people, not for them. |
| Closing | - Thanked the audience and handed the floor to moderator Nipun Malhotra to introduce the multi‑stakeholder panel. |
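The report’s recommended shift from identity‑based labels to functional‑challenge descriptors can be illustrated with a minimal sketch. The class and field names below are hypothetical illustrations (loosely inspired by functional‑difficulty question scales), not a schema from the report:

```python
from dataclasses import dataclass, field

# Identity-based record: a single diagnostic label carries little design signal.
@dataclass
class IdentityRecord:
    person_id: str
    disability_category: str  # e.g. "visual impairment"

# Functional record: captures lived challenges that products and policy can act on.
@dataclass
class FunctionalRecord:
    person_id: str
    # Each entry pairs an everyday task with a difficulty level
    # (0 = no difficulty, 1 = some, 2 = a lot, 3 = cannot do at all).
    functional_challenges: dict = field(default_factory=dict)

record = FunctionalRecord(
    person_id="anon-001",
    functional_challenges={
        "reading standard print": 3,
        "navigating unfamiliar buildings": 2,
        "using a touchscreen": 1,
    },
)

# A functional record lets a designer ask actionable questions directly:
needs_audio_output = record.functional_challenges.get("reading standard print", 0) >= 2
print(needs_audio_output)  # True for this record
```

The point of the sketch is the query at the end: functional data answers a design question (“does this user need audio output?”) that a bare category label cannot.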
4. Multi‑Stakeholder Panel – Structured Q&A
The moderator announced a three‑minute response format per panelist, with a timer visible to the audience. The four panelists were: Andrew Fleming (UK), Agastya Mehta (Meta), Amar Jain (Mission Accessibility), and Tukaram Mundhe (Maharashtra). Two pre‑selected questions were posed to each panelist, followed by audience questions.
4.1. Andrew Fleming (British High Commissioner) – “AI in Early Education (UK Experience)”
| Question | Fleming’s Answer |
|---|---|
| Q1 – Early‑education AI interventions for learners with dyslexia/dyspraxia | - Recalled personal experience of being excluded from school support in the 1970s. - Described current UK tools: adaptive learning platforms, speech‑to‑text, and real‑time feedback that personalise content for children with SEND (Special Educational Needs & Disabilities). - Cited his own child (dyslexic + ADHD) benefiting from AI‑driven reading aids and organisational tools, leading to greater independence and confidence. |
| Q2 – Remaining gaps & responsible failure | - Acknowledged that implementation is not perfect: scalability, fairness, and inclusion of lived‑experience feedback remain challenges. - Emphasised the principle of “responsible failure”: piloting and learning from setbacks is essential when vulnerable groups are affected. |
| Key Insight | AI can augment traditional special‑education support, but continuous co‑design with disabled learners is mandatory. |
4.2. Agastya Mehta (Meta) – “Balancing Privacy & Accessibility; Tackling AI Bias”
| Question | Mehta’s Answer |
|---|---|
| Q1 – Privacy vs. accessibility | - Framed the issue as a societal trade‑off: a hypothetical world where everyone is blind and a vision‑restoring technology is introduced would raise privacy concerns. - Argued that, like the evolution from cell‑phone cameras to ubiquitous recording, society can eventually adapt to new assistive capabilities. - Warned that privacy‑related behavioural changes (e.g., reduced public dancing) already occur; the balance will be context‑specific. |
| Q2 – AI bias & inclusion of disabled data | - Reiterated the “nothing about us without us” mantra: disabled people must be part of the data‑collection and model‑training pipeline. - Stressed that training data determines model quality; inclusive data yields products that benefit all users (the “curb‑cut effect”). - Highlighted that bias mitigation is economically sensible, not just a PR exercise. |
| Key Insight | Privacy concerns can be managed through societal adaptation and clear governance, while bias is fundamentally a data problem that is solved by inclusive co‑design. |
4.3. Amar Jain (Mission Accessibility) – “AI as Empowerment for Visually‑Disabled Professionals”
| Question | Jain’s Answer |
|---|---|
| AI empowerment | - Described four AI‑enabled capabilities for a visually‑disabled lawyer: 1. Access to information – reading handwritten documents, screenshot‑based text extraction, screen‑share transcription. 2. Professional competitiveness – AI tools level the playing field in legal practice. 3. Environmental awareness – AI reads signboards, identifies objects, assists navigation. 4. Personal autonomy – e.g., scanning a pregnancy‑test result for privacy. |
| Current challenges | - Data gaps: lack of inclusive training datasets leads to discriminatory outcomes (e.g., facial verification in e‑KYC failing for blind users). - Legal enforcement: strong disability‑rights laws exist, but enforcement mechanisms are weak. |
| Recommended way forward | - Mandate accessibility audits and conformance reports in procurement contracts. - Require disability‑inclusive testing during product development. - Institutionalise co‑design and participatory testing. |
| Key Insight | AI can dramatically expand professional and personal agency for disabled individuals, but legal enforceability and inclusive data are the bottlenecks. |
4.4. Tukaram Mundhe (Maharashtra) – “State‑Level AI Governance & the ‘Divyanga Sahag’ Portal”
| Question | Mundhe’s Answer |
|---|---|
| AI‑driven governance | - Described the Divyanga Sahag portal (https://divyangasahagportal.maharashtra.gov.in) – a single‑stop digital gateway for all disability‑related schemes. - Workflow: user creates a profile → system auto‑profiles using existing governmental data (e.g., UDID) via APIs; the portal instantly lists eligible schemes and enables one‑click applications. - Data aggregation eliminates redundant data entry and speeds up service delivery. |
| Implementation challenges | - Affordability & accessibility: rural users need low‑cost smartphones and reliable internet. - Legal enforcement: accessibility standards exist but are hard to implement across public portals. |
| AI‑enabled grievance redressal | - Launched an AI‑driven chatbot (Nov 2023). - Received more than 5,200 grievances, of which ≈5,000 have been resolved in an average of 5 days. - Analytics reveal most grievances relate to discrimination and lack of accessibility. |
| Scalability vision | - Plans to expand the portal as a “one‑stop solution” for interaction between persons with disabilities, NGOs, and government agencies. |
| Key Insight | State‑level AI platforms can centralise benefit‑eligibility data, dramatically reduce friction for disabled citizens, but must be paired with affordable device access and robust enforcement of accessibility standards. |
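The portal workflow summarised above (profile creation → auto‑profiling from existing records → instant listing of eligible schemes) can be sketched as a simple rules engine. The scheme names, eligibility criteria, and profile fields below are hypothetical illustrations, not the portal’s actual data model or API:

```python
# Hypothetical sketch of the auto-profiling + scheme-eligibility flow.

# Profile fields a portal might pull from existing records (e.g. a UDID entry).
profile = {
    "disability_type": "visual",
    "disability_percent": 60,
    "age": 34,
    "state": "Maharashtra",
}

# Each scheme declares its own eligibility predicate over the profile.
schemes = [
    ("Assistive-device subsidy", lambda p: p["disability_percent"] >= 40),
    ("Education scholarship",    lambda p: p["age"] <= 25),
    ("Travel concession",        lambda p: p["disability_type"] in {"visual", "locomotor"}),
]

def eligible_schemes(p):
    """Return the names of schemes whose criteria the profile satisfies."""
    return [name for name, rule in schemes if rule(p)]

print(eligible_schemes(profile))  # ['Assistive-device subsidy', 'Travel concession']
```

Because eligibility lives in per‑scheme predicates rather than per‑user paperwork, a new scheme can be added as one rule and every existing profile is matched against it automatically, which is the friction reduction the portal description emphasises.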
4.5. Audience Questions
| Question | Respondent(s) | Summary of Answers |
|---|---|---|
| Q1 (Nippon) – Rural employment & AI | Tukaram Mundhe (brief) | Emphasised that AI‑enabled portals can surface employment schemes and up‑skilling opportunities; however, digital divide (lack of smartphones/internet) remains the primary barrier. |
| Q2 – Multilingual & multimodal AI (design perspective) | Agastya Mehta (brief) | Stated that multimodal authentication (voice, facial, gesture) is being incorporated globally to serve diverse ability groups; multilingual models can reduce cognitive load when switching modalities. |
| Q3 – AI & labor‑market inclusion (personal anecdote) | Unnamed speaker (likely a participant) | Shared a personal story: a blind mother lacking formal qualifications due to inaccessible education; argued AI could bridge artificial barriers and enable such individuals to compete in the labour market. |
| Q4 – Disability‑certified procurement | Amar Jain (closing remarks) | Suggested a “disability‑certified product” label (similar to safety certifications) for public procurement; advocated for a “disability impact assessment” as a mandatory step. |
5. Closing Announcements
| Announcer | Announcement |
|---|---|
| Nipun Malhotra (moderator) | - Introduced the Accessibility Collagen (TAC) – an alliance of 20+ NGOs plus private‑sector partners, focused initially on technology. - Unveiled the logo (with a description for visually‑impaired attendees). - Announced George Abraham as the TAC President. |
| Nipun Malhotra (final) | - Called for disability‑certified procurement and broader stakeholder collaboration. - Thanked the sign‑language interpreters (ISLRTC) and all participants. - Directed speakers to exit the hall before the next session. |
Key Takeaways
- Scale of Need: WHO estimates ≈200 million Indians live with significant disabilities; ≈3 % have access to assistive products—a massive accessibility gap that will widen by 2050 (3.5 billion people globally needing assistive tech).
- Algorithmic Exclusion Exists: Current AI systems can unintentionally discriminate (e.g., dysarthric speech, neuro‑divergent eye‑movement patterns) – this automates bias unless addressed early.
- Mandated Standards Required: India is moving toward mandatory, enforceable ICT accessibility standards; enforcement mechanisms (audits, conformance reports) are essential.
- Inclusive Design Principles (Royal Society Report): Include (co‑design with disabled users); Access (affordable, widespread digital infrastructure); Improve (high‑quality, functional disability data).
- Smartphones as Assistive Tools: Policy recommendation to recognise smartphones as formal assistive technology to unlock low‑cost, ubiquitous solutions.
- Co‑Design is a Competitive Advantage: Involving disabled persons from data collection to model training reduces bias and creates the “curb‑cut effect,” benefiting all users.
- State‑Level AI Success (Maharashtra): The Divyanga Sahag portal demonstrates how AI can auto‑profile users, aggregate scheme eligibility, and resolve thousands of grievances in ~5 days on average, illustrating a scalable governance model.
- Economic Viability: AI solutions must be economical (≤ ₹5,000) to be adoptable in low‑income settings; cost‑effectiveness is a prerequisite for national impact.
- Employment Pathways: AI can remove artificial barriers in education and hiring, offering new employment opportunities for disabled individuals, especially in rural areas where 70 % of the disabled population resides.
- Multilingual & Multimodal Future: Combining language and modality (voice, vision, gesture) can reduce cognitive load and increase accessibility for diverse disability groups.
- Collaboration Platform (Accessibility Collagen): The newly launched TAC provides a cross‑disability alliance to steer policy, standards, and industry collaboration, with an emphasis on technology.
End of summary.