This article explores how artificial intelligence can both empower and disadvantage persons with disabilities in employment. It highlights the ILO Global Business and Disability Network’s efforts to promote responsible, inclusive AI that enhances accessibility while preventing bias and discrimination in the workplace.
While individual use of AI by persons with disabilities, e.g. text-to-speech or real-time captioning, tends to be positive, institutional use of AI-powered human resources technology, e.g. systems that filter out persons with “non-standard” faces or speech patterns, tends to disadvantage and exclude job seekers and workers with disabilities. To help create fair and efficient labour markets, AI developers and buyers of AI-powered employment tools need to address disability-based discrimination and its impact on an organisation’s talent pool, workforce and customers. AI-related regulation needs to close the existing disability equality gap and ensure that AI tools are applied in a non-discriminatory way before they reach users, whether employers’ human resources departments or consumers with disabilities.
Employment of people with disabilities in the global digital economy
There are an estimated 1.3 billion people with disabilities in the world, most of them of working age. However, seven in ten persons with disabilities are outside the labour force, compared to four in ten persons without disabilities. Intersecting dimensions, such as gender, can further aggravate the disadvantage faced by persons with disabilities, including in the labour market.[1]
The rise of the digital economy provides opportunities to narrow this employment gap. Promoting the accessibility of digital platforms and work tools, together with inclusive digital skills training, is key in this endeavour.[2] At the same time, the use of AI is not specific to the digital economy and offers both opportunities and barriers for persons with disabilities: those looking for a job, those already employed, and those who will acquire a disability while in employment.
AI as an enabler: creating accessible workplaces
AI is taking Assistive Technologies to a new level by improving functionality and customization.[3] For instance, for individuals with hearing, speech, or visual impairments, AI-powered tools such as real-time captioning, speech-to-text, and sign language interpretation can bridge otherwise inaccessible communication channels. For those with cognitive disabilities, generative AI can act as a crucial support system, summarizing complex information and breaking down large tasks into smaller, more manageable steps to help with organisation and time management.[4]
AI also has the potential to mitigate unconscious human bias in hiring. By training systems to focus exclusively on skills and qualifications, companies could reduce human predispositions related to disability during the initial screening process. Some AI tools can anonymize candidate information, helping to ensure evaluations are based purely on job-related criteria. Companies that use tools intentionally and expertly designed to promote inclusive and non-discriminatory AI for hiring can also gain a competitive advantage by accessing a broader talent pool.
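The anonymization approach mentioned above can be illustrated with a minimal sketch. Everything here is hypothetical: the field names, the redaction keywords and the `anonymize_candidate` function are illustrative assumptions, not a description of any real screening product.

```python
import re

# Hypothetical examples: real tools would use far more careful, validated lists.
PII_FIELDS = {"name", "photo", "date_of_birth"}
DISABILITY_TERMS = [r"disab\w*", r"wheelchair", r"blind", r"deaf", r"autis\w*"]

def anonymize_candidate(record: dict) -> dict:
    """Strip identity fields and redact disability-related terms so that
    downstream scoring sees only job-related information."""
    cleaned = {k: v for k, v in record.items() if k not in PII_FIELDS}
    pattern = re.compile("|".join(DISABILITY_TERMS), flags=re.IGNORECASE)
    for key, value in cleaned.items():
        if isinstance(value, str):
            cleaned[key] = pattern.sub("[REDACTED]", value)
    return cleaned

candidate = {
    "name": "A. Example",
    "skills": "Python, data analysis",
    "awards": "National Disability Leadership Award, 2023",
}
print(anonymize_candidate(candidate))
```

Keyword redaction of this kind is only a first step: as the next section shows, bias can also enter through signals that no blocklist can catch.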
The risks: bias, opacity, and new barriers
AI introduces significant risks if deployed without rigorous oversight. AI can perpetuate historical biases and create new forms of exclusion, with serious legal and ethical consequences.[5]
AI systems trained on historical hiring data are likely to learn to replicate and amplify existing patterns of exclusion and unfair treatment. Due to high unemployment rates for persons with disabilities, it is likely that employment-related data sets used by AI underrepresent persons with disabilities. AI may learn that this is a "desirable" trend, reinforcing a bias against job applicants with disabilities.[6] Additionally, AI can introduce new forms of disability-based discrimination.[7] A recent study found that AI résumé screeners ranked CVs with disability-related awards lower than identical résumés without such information.[8] Similarly, AI video tools that measure non-verbal cues like eye contact and vocal cadence can discriminate against candidates with disabilities, including persons with facial paralysis, hearing loss, or neurodivergent individuals, by misreading these behaviours as signs of disengagement or incapacity.[9]
AI regulations like the EU AI Act have started requiring accessibility standards for "high-risk" AI systems. The EU AI Act recognises the risk of discrimination against persons with disabilities and points to the risks when AI is used in recruitment processes and performance monitoring.[10] In the area of performance monitoring, AI-powered tracking systems can enforce rigid schedules, disadvantaging employees who need flexibility or adjustments, including because of a disability.
A framework for responsible and disability-inclusive AI
The ILO Global Business and Disability Network (GBDN)[11] began to address the impact of AI on the employment of persons with disabilities back in 2019. Sessions at recent ILO GBDN global conferences (2024 “Inclusiveness in a digital economy: Avatars, accessibility, and Artificial Intelligence”; 2023 “Artificial Intelligence and the future of technologies: Impact and opportunities”) and its virtual events like “How do you lip-read a robot? AI-powered HR technology has a disability problem” aim to raise awareness about AI’s promise as well as its risks for both employers and persons with disabilities.
In partnership with initiatives like “Disability Ethical? AI” and the “Equitable AI Alliance”, the ILO Global Business and Disability Network is working towards changing businesses’ legal, ethical, and cultural approach to the use of AI for the recruitment and employment of persons with disabilities. Key recommendations include:
- Prioritize ethical and non-discriminatory design and development: AI developers should actively involve persons with diverse disabilities, their representative organizations, and accessibility experts in the entire AI design, development, and testing lifecycle.[12] This would help to ensure that tools are accessible, free of bias, and able to assess candidates’ potential accurately.
- Implement responsible deployment and governance: AI should augment, not replace, human decision-making. Employers should ensure meaningful human oversight and review of all AI-generated employment decisions, including those affecting individuals with disabilities. Clear accountability should be established for AI system performance and any discriminatory outcomes that may arise.
- Foster an inclusive culture: Companies should embed ethical AI considerations into their broader disability inclusion strategies. This includes providing training for human resources professionals and managers on the ethical use of AI, while also promoting digital literacy for employees with disabilities.
- Strengthen AI-related laws and public policies: AI-related regulations need to respond to the biases and barriers that arise when AI systems are used to assess persons with disabilities, including in recruitment and employment. Accessibility and inclusivity should be central to the development of these AI tools, so that employers who buy them do not risk engaging in disability-based discrimination.
[1] ILO: Disability Labour Market Indicators (DLMI database)
[2] ILO, 2021: An inclusive digital economy for people with disabilities
[3] Digital Learning Institute: Revolutionising Accessibility: The Role of AI in Assistive Technology
[4] World Business Council for Sustainable Development (WBCSD), 2020: Empowering people with disabilities through AI
[5] Special Rapporteur on the rights of persons with disabilities, 2021: Artificial intelligence and the rights of persons with disabilities - Report of the Special Rapporteur on the rights of persons with disabilities
[6] Warden AI, 2025: Disability Bias in AI: How and Why to Audit
[7] The Institute for Ethical AI, 2020: Recruitment AI has a Disability Problem: Questions Employers Should be Asking to Ensure Fairness in Recruitment
[8] University of Washington, 2024: Identifying and Improving Disability Bias in GPT-Based Resume Screening
[9] Bloomberg Law, 2025: AI Hiring Tools Elevate Bias Danger for Autistic Job Applicants
[10] EU Artificial Intelligence Act
[11] The ILO Global Business and Disability Network brings together more than 40 multinational enterprises, more than 45 National Business and Disability Networks, and 7 non-business members like the International Disability Alliance (IDA).
[12] Nordic Welfare Center, 2025: AI for all – inclusive technology is a collective responsibility
