Is artificial intelligence streamlining the hiring process, or merely automating old biases in new ways? As employers increasingly turn to AI for recruitment and hiring, researchers are raising important questions about the technology’s effectiveness in creating fair opportunities for all candidates. A recent study by Theresa Fister and George K. Thiruvathukal of Loyola University Chicago explores the human experience of applying for jobs through AI and investigates potential biases in these systems.
The Promise vs. Reality of AI Hiring
Many companies have embraced AI hiring technologies, believing that algorithms evaluate candidates more objectively than human interviewers. However, research suggests this assumption of algorithmic neutrality deserves scrutiny.
AI hiring systems, while appearing impartial, are created by humans and trained on human-generated data. As a result, these systems can inherit and even amplify existing biases. The algorithms “learn from patterns,” and if those patterns reflect historical discrimination, the AI may perpetuate these problems.
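To make the mechanism concrete, here is a minimal sketch, not drawn from the study itself, using synthetic data and scikit-learn. The group penalty, the feature names, and the logistic-regression screener are all hypothetical choices for illustration; the point is only that a model trained on historically biased decisions reproduces the same disparity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "historical hiring" data: candidates from group A and group B
# have identical skill distributions, but past human decisions favored group A.
rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B (hypothetical)
skill = rng.normal(0, 1, n)          # true qualification, same for both groups

# Historical decision: driven by skill, plus an unfair penalty for group B.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# Train a screening model on the biased historical outcomes.
X = np.column_stack([skill, group])
X_train, X_test, y_train, y_test = train_test_split(X, hired, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The model "learns the pattern": two equally skilled candidates get
# systematically different predicted hire probabilities by group.
probe = np.array([[0.0, 0], [0.0, 1]])   # same skill, different group
print(model.predict_proba(probe)[:, 1])
```

Nothing in the training code mentions discrimination; the bias enters entirely through the labels the model is asked to imitate, which is why an apparently neutral pipeline can still produce unequal outcomes.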
Examples of AI hiring bias have already emerged; most notably, Amazon reportedly scrapped an experimental resume-screening tool after discovering it penalized resumes containing the word “women’s.”
The Human Experience of AI Interviews
The Loyola University Chicago study gathered feedback from 25 participants across 12 industries who had experienced AI-mediated job interviews. The responses paint a revealing picture of how candidates perceive these systems.
While most participants (67%) reported being able to access and complete the interview software, their comfort levels varied considerably. Only 44% agreed they felt comfortable in the interview atmosphere, while 33% disagreed.
Perhaps most telling: when asked if they would choose AI interviews over traditional in-person interviews in the future, a resounding 67% strongly disagreed. No participants preferred the AI option.
Key Concerns from Candidates