ChatGPT Shows Bias Against Resumes Indicating Disability, Study Finds

Recent research reveals that OpenAI’s ChatGPT consistently ranks resumes with disability-related credentials lower than those without such indicators. Graduate student Kate Glazko from the University of Washington observed that AI tools used in hiring can replicate and amplify real-world biases, particularly against disabled individuals.

AI and Automated Resume Screening

Glazko, a graduate student at the Paul G. Allen School of Computer Science & Engineering, examined how AI tools like ChatGPT handle resumes that suggest a disability, since automated resume screening is already a common hiring practice known to exhibit bias. The study showed that resumes listing credentials such as the “Tom Wilson Disability Leadership Award” were ranked lower than otherwise identical resumes without such honors.

Findings and Implications

The study found that ChatGPT’s rankings perpetuated stereotypes: a resume with an autism leadership award, for instance, was described as having “less emphasis on leadership roles.” When the AI was customized with written instructions to avoid ableist bias, rankings improved for five of the six tested conditions: deafness, blindness, cerebral palsy, autism, and the general term “disability.” Even so, only three of those saw their resumes ranked higher than the resumes without disability mentions.

Research Methodology

The researchers used a publicly available curriculum vitae (CV) as a control and created six enhanced versions, each implying a different disability by adding four disability-related credentials. ChatGPT was then asked to rank each enhanced CV against the control for a real job listing. Across 60 trials, the enhanced CVs were ranked first only 25% of the time.
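The paper’s exact prompts and code are not reproduced in this article, but the pairwise-ranking protocol it describes is straightforward to picture. Below is a minimal sketch of a single trial using the OpenAI Python SDK; the model name, prompt wording, and the rank_cvs helper are all assumptions for illustration, not the study’s actual implementation.

```python
# Minimal sketch of one pairwise-ranking trial (not the study's actual code).
# Model name, prompt wording, and helper names are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def rank_cvs(job_listing: str, control_cv: str, enhanced_cv: str) -> str:
    """Ask the model which of two CVs better fits a job listing."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "You are screening resumes for a job opening."},
            {"role": "user",
             "content": (
                 f"Job listing:\n{job_listing}\n\n"
                 f"Candidate A:\n{control_cv}\n\n"
                 f"Candidate B:\n{enhanced_cv}\n\n"
                 "Rank the two candidates for this role and explain why."
             )},
        ],
    )
    return response.choices[0].message.content


# A study-like protocol would repeat such trials across the six enhanced
# CVs (60 trials in total) and count how often the enhanced CV is ranked
# first, giving the 25% figure reported above.
```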

Bias Reduction and Challenges

When the researchers instructed ChatGPT to avoid ableist language and to prioritize disability justice and DEI principles, rankings for the enhanced CVs improved. For some conditions, however, such as autism and depression, the improvement was minimal. Glazko emphasized that anyone using AI in hiring needs to be aware of these biases, which persisted even after customization.
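The study customized ChatGPT through written instructions rather than code, and those instructions are not quoted here. As a rough illustration, the same effect can be approximated by prepending a system message to the ranking prompt sketched earlier; the wording below is an assumption, not the researchers’ text.

```python
# Sketch of injecting bias-mitigation instructions as a system message.
# The instruction text is illustrative only; the study's actual custom-GPT
# instructions are not reproduced in this article.
MITIGATION_INSTRUCTIONS = (
    "You have expertise in disability justice and DEI. Do not use ableist "
    "language or reasoning. Treat disability-related awards, scholarships, "
    "and organizational memberships as evidence of leadership and skill, "
    "not as grounds for ranking a candidate lower."
)

messages = [
    {"role": "system", "content": MITIGATION_INSTRUCTIONS},
    # ...followed by the same ranking prompt used in the uncustomized trials.
]
```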

Future Directions

The research team, including senior author Jennifer Mankoff, stressed the importance of studying and documenting AI biases. They called for further research on other AI systems, such as Google’s Gemini and Meta’s Llama, and on how disability bias intersects with other attributes such as gender and race, with the goal of developing AI systems that are equitable and fair.

Organizations such as ourability.com and inclusively.com are working to improve job outcomes for disabled individuals, recognizing the need for inclusive hiring practices. The study’s findings were presented at the 2024 ACM Conference on Fairness, Accountability, and Transparency in Rio de Janeiro.

Funding for this research came from the National Science Foundation, the UW’s Center for Research and Education on Accessible Technology and Experiences (CREATE), and Microsoft.