
Risks of Using AI for Candidate Screening


by Zachary Amos, Features Editor at ReHack.com

Hiring decisions carry real weight, and AI has changed the game. Tools that promise speed, scale and efficiency are tempting, but they also introduce risks HR professionals can’t afford to overlook. Ethical use of AI in recruitment requires understanding the legal landscape, including the Employment Equality Acts, GDPR and the EU AI Act.

Technology can screen hundreds of applications in minutes, match keywords to skills and even rank candidates, but speed shouldn’t come at the cost of fairness. Employers must approach AI recruitment carefully, keeping transparency, accountability and compliance front and center.

1. Algorithmic Bias and Unfair Outcomes

One of the most significant risks is that AI systems inherit biases from their training data. If historical recruitment data reflects inequality, the AI tool may reinforce that pattern.

  • Favoritism: A recent study revealed that an AI resume-screening tool favored candidates based on name and gender, choosing white-sounding names 85% of the time and men 52% of the time.
  • Legal exposure: The Employment Equality Acts primarily govern AI for candidate screening, prohibiting discrimination on grounds such as gender, race, age and disability.

HR professionals can reduce the risk by regularly auditing AI tools for disparate outcomes, as sketched below. However, some providers may be reluctant to share their training data, making compliance difficult to prove.
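One concrete audit is a disparate-impact check based on the “four-fifths rule”: compare selection rates across demographic groups and flag any group whose rate falls below 80% of the highest group’s. Here is a minimal illustrative sketch; the group labels, data and threshold are assumptions, not taken from any real screening tool:

```python
# Minimal sketch of a disparate-impact audit using the "four-fifths rule":
# flag any group whose shortlisting rate falls below 80% of the best group's.
# All data and labels here are hypothetical.

def selection_rates(outcomes):
    """outcomes: list of (group, was_shortlisted) pairs."""
    totals, passed = {}, {}
    for group, shortlisted in outcomes:
        totals[group] = totals.get(group, 0) + 1
        passed[group] = passed.get(group, 0) + int(shortlisted)
    return {g: passed[g] / totals[g] for g in totals}

def four_fifths_audit(outcomes, threshold=0.8):
    rates = selection_rates(outcomes)
    best = max(rates.values())
    # Impact ratio: each group's rate relative to the best-performing group.
    return {g: (rate / best, rate / best >= threshold)
            for g, rate in rates.items()}

# Hypothetical screening outcomes: (group label, shortlisted?)
data = [("A", True)] * 40 + [("A", False)] * 60 \
     + [("B", True)] * 25 + [("B", False)] * 75

for group, (ratio, ok) in four_fifths_audit(data).items():
    print(group, f"impact ratio = {ratio:.2f}", "OK" if ok else "FLAG")
# A: impact ratio = 1.00 OK
# B: impact ratio = 0.62 FLAG
```

A flagged ratio isn’t proof of illegal discrimination, but it is the kind of documented evidence regulators and candidates can reasonably ask employers to produce.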

2. Lack of Transparency and Explainability

Most AI systems operate as “black boxes,” meaning their decision-making processes are not transparent. They produce outcomes such as shortlists, rankings or rejections without explaining how they reached them. This opacity can make it challenging for HR professionals to understand how candidates were evaluated and selected.

The EU AI Act places AI systems into four risk classifications: unacceptable, high, limited and minimal. AI systems used in recruitment and human resources fall into the high-risk category. Most employers are AI “deployers,” responsible for overseeing the system in operation, ensuring they feed it relevant data, and complying with requests to explain how the AI affects people.

Others are AI providers, the companies that create and build the tool. Providers carry more responsibility than employers because they program the system, and its biases reflect on them. Job applicants can ask for feedback and meaningful information about automated decision-making, and a company may fall short of its legal obligations if the AI can’t give distinct reasons for rejecting someone.

Because recruitment AI is high-risk, employers must explain relevant decisions, keep detailed documentation and provide human oversight. Without clear insight into how the AI makes decisions, organisations may struggle to justify hiring choices, exposing themselves to legal and ethical challenges and to hiring candidates who aren’t suitable for the position.

3. Over-Reliance on Technology

AI is attractive to businesses because of its speed: 91% of executives globally are scaling up AI initiatives, and 74% of employers use it for about a quarter of their work. It can process resumes in seconds, flag keywords and analyse video interviews, but speed doesn’t equal quality. AI screenings often reward candidates who can “game” the system with tactics like stuffing their resumes with specific keywords.
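To see why keyword stuffing works, consider a deliberately naive keyword-match scorer. This is purely illustrative, with hypothetical keywords and resumes; no real screening product works exactly this way, but crude matching of this kind rewards repetition rather than genuine ability:

```python
# Illustrative sketch: a naive keyword-match scorer that counts every
# occurrence of a target keyword, so repetition alone inflates the score.
import re
from collections import Counter

KEYWORDS = {"python", "sql", "agile", "leadership"}  # hypothetical job keywords

def keyword_score(resume_text: str) -> int:
    """Count every occurrence of a target keyword in the resume."""
    tokens = Counter(re.findall(r"[a-z]+", resume_text.lower()))
    return sum(n for word, n in tokens.items() if word in KEYWORDS)

honest = "Led an agile team building Python and SQL data pipelines."
stuffed = "Python Python Python SQL SQL agile agile leadership leadership."

print(keyword_score(honest))   # 3
print(keyword_score(stuffed))  # 9 -- repetition alone triples the score
```

The stuffed resume says nothing about real experience, yet it outranks the honest one, which is exactly the failure mode the next paragraph describes.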

While most HR professionals believe AI enhances company success, the tools might overlook ideal candidates whose CVs don’t match the AI’s template. That’s especially concerning, since jobs that rely on soft skills like communication, problem-solving, adaptability and resilience will make up an estimated two-thirds of the workforce by 2030.

Because AI algorithms often prioritise hard credentials and keyword matches, they may miss the qualities that drive collaboration and long-term growth. Businesses risk losing out on high-potential hires whose strengths aren’t easy to quantify.

4. Candidate Experience

Recruitment is equal parts qualifications, relationships and reputation. Candidates who know a machine is evaluating them may disengage from the process. Studies show that candidates in AI-led interviews tend to emphasise their analytical traits while downplaying the emotional intelligence, flexibility and collaboration that are increasingly vital in today’s workforce. This shift can skew results, as AI may prioritise responses that align with its programmed expectations instead of those that reflect a candidate’s personality.

Some candidates feel an AI interview removes the human touch and leaves them feeling unappreciated: the tool asks about their resume and technical proficiencies, but its answers to questions about company culture are brief and impersonal.

However, HR professionals argue that AI assistants help them manage their time and enhance the recruitment process by filtering top talent, freeing them to focus on other tasks until candidates reach the in-depth second or third rounds.

According to the World Economic Forum, 88% of companies use AI to screen candidates — and these systems have evolved to better recognise soft skills. Instead of relying solely on traditional filters like education and experience, newer AI tools assess candidates’ competencies through conversational analysis.

AI Can Support but Not Replace Human Oversight in Candidate Screening

While AI candidate screening has undeniable advantages, it also has genuine risks. HR professionals should use AI as a supplementary tool, not a final decision-maker, ensuring humans review every shortlist and hiring decision instead of leaving it entirely to algorithms.

Employers must be honest with candidates about how they use AI, with regular audits to identify biases. The organisations that thrive in this landscape will balance technological efficiency with ethical responsibility, ensuring fairness, trust and compliance remain central to the recruitment process.

About the author

Zachary Amos is the Features Editor at ReHack.com, where he focuses on trending technology topics such as cybersecurity, artificial intelligence, HR tech, and health IT. Publications that have featured his tech insights include VentureBeat, ReadWrite, ISAGCA, Unite.AI, and HR.com. Zachary also produces tech content for a variety of other publications, including Forbes, HIT Consultant, and TalentCulture.
