Addressing Candidate Concerns about AI and the Hiring Process

by Richard T. Justenhoven, Director of Global Products, Aon Assessment Solutions at Aon

Applicants aren’t sold on using AI in the selection process. Here’s what organizations can do to overcome their concerns.

Many employers are intrigued by the idea of using artificial intelligence to improve their hiring processes, but candidates remain a good bit more wary.


My colleagues John Capman, Manuel Gonzalez, Achim Preuss and I recently conducted an online experiment to examine whether people react differently to an AI versus a human decision-maker, and whether these reactions depend on whether a job offer is made at the end of the process. What we found suggests that people react unfavourably both to AI and to the organizations that use it, and that the experience taints their impressions of the employer even when they are offered a job.

So how can employers help candidates feel more comfortable with their hiring process? Here are steps you can take to address concerns about AI and increase candidates’ trust in the talent assessment process.

Focus on the Human Touch
Hiring managers should take care to maintain a sense of human connection throughout the hiring process so that it does not feel too impersonal.

Because AI operates autonomously, people often mistakenly interpret that autonomy as a sign that humans can't fully control the system. The data suggest that these concerns can lead people to react unfavourably toward AI and toward the organizations that use it for selection purposes.

Applicant reactions appear to be driven primarily by concerns about impersonal treatment rather than by doubts about the fairness of the selection process. Applicants who reacted negatively to AI also reported negative emotions toward the organization, including anger and skepticism, along with reduced hope and excitement during the selection process.

Concerns about AI can spill over even after a job offer is made. The selection process is often a candidate's first encounter with the organization, and a perceived lack of human interaction or impersonal treatment during that process can shape expectations of how the organization will behave.

These feelings can linger well after the selection process ends. That's why it's important for companies to ensure that candidates are interacting with people, not just machines. Human contact helps assuage concerns about the role AI plays in the hiring process and helps candidates trust that humans, not machines, are making the decisions.

Emphasize Education and Consent
Education can curb candidate anxiety about the use of AI in talent assessment. Organizations can explain to applicants how the AI works, how and why it is being used, and what the benefits of this approach are, such as a better fit with the role and the organization. The more applicants know about the role AI plays in the hiring process, the more comfortable they are likely to feel with it.

The need for education is particularly critical amid increasing calls for transparency about how organizations use applicants' data. In the European Union, organizations are already required by law to obtain applicants' consent if they wish to apply automated decision-making.

Educating applicants and providing greater transparency during the consent process can ease concerns about AI. Aon’s research suggests that providing such explanations can lead people to react favourably during selection processes by making them feel more informed and respected by the organization.

Embrace Radical Transparency
AI in talent assessment can't be a "black box." Companies must be transparent with applicants about their use of AI in the hiring process. A "glass-box" AI, in which all stakeholders understand what's being measured and how those measurements are used, makes education easier, lowers legal and regulatory risk, and eases applicants' anxieties about the talent assessment process.

Candidates who are more familiar with AI are generally more accepting of its use. We found that participants who were highly familiar with AI were just as trusting of an AI decision-maker as of a human one; their reactions to the selection decision were not affected by whether the decision-maker was an AI or a human.

Our research suggests that these applicants may also have fewer concerns about transparency. People who understand how much human involvement goes into designing and applying AI tend to be less worried than those who know little about it.

Organizations that use AI in hiring need to increase interpersonal contact with applicants and treat them respectfully during the selection process. Even when an AI system automates the decision-making, applicants may find comfort in having open lines of communication with a contact person at the organization while they are applying.

The more information applicants have about how your organization is using AI, the more likely they are to embrace it.

About the author
Richard Justenhoven is the product development director within Aon's Assessment Solutions. A leading organizational psychologist, Richard is an acknowledged expert in the design, implementation and evaluation of online assessments and a sought-after speaker on these topics.
