AI’s Role in the Hiring Process

by Orin Davis, Principal Consultant at Quality of Life Laboratory Consulting

Like many psychologists, I am highly dubious of the role that AI plays in processes that require discretion, creativity, and judgment. This is especially true in critical areas of life and business, like hiring, where both an individual's livelihood and a company's success depend upon making good decisions. AI is simply nowhere near capable of making such decisions now or in the foreseeable future. As many have noted (here's a good overview), AI is riddled with biases and overfit models, and it can only be as good as the humans who program it.

But, this doesn’t mean we scrap the AI systems that we have, or label all current and future uses of AI in hiring as unethical (though a great many are). Instead, we need to understand the valuable adjunctive role that AI can play in our hiring processes, which in turn can make them more fair and ethical.

To see where AI can play a role in 21st Century hiring, it’s important to have a clear picture of how to hire. As I detailed elsewhere, there are seven major steps to the hiring process:

  1. Understand your company: its value proposition, its culture, and the indicators of fit with that culture. These should all include a diversity and inclusion strategy (cheat sheet).
  2. Understand the position: conduct a detailed job analysis
  3. Determine whether the candidates have the right skills
  4. Review the resume as a portfolio of experiences, and ignore any identifying information (alma mater, name, gender, etc.)
  5. Run a phone screen or review cover letters from any candidates remaining (you don’t need both; solicit letters of interest only from candidates that made it this far)
  6. Conduct in-person interviews
  7. Make and communicate hiring decisions (be prompt about this for everyone!)

Steps 1, 2, 5, and 6 require significant creativity and discretion, so AI shouldn’t be used for any of them. Results would be erroneous at best, and unethical/illegal at worst. Using AI in Step 7 is overkill, but certainly possible. This leaves Steps 3 and 4, which are indeed where AI can shine.

Parsing Resumes With AI

Employing AI to parse resumes/portfolios must be done carefully. The temptation is to have the system seek out a set of keywords that fit the job, in combination with a certain number of years of experience, and simply eliminate anyone who doesn't have the right stuff. As convenient as that may be, you're likely to miss a great candidate that way. Instead, start by using your creativity and discretion to plan in advance what you want to see on the resume. Think about years of experience and the keywords you would like to see, but also think about what other kinds of knowledge, skills, and experiences would contribute to the company. Put that highly developed framework into the AI system, and use it to seek out those attributes on resumes. Then, select candidates that are high, medium, and low fits on that basis.
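
For the technically inclined, here is a minimal sketch (in Python) of what such a framework might look like. Everything in it is illustrative: the criteria, keywords, weights, and cutoffs are hypothetical placeholders, and a real system would score a parser's structured output rather than raw text.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Criterion:
    """One attribute you decided in advance you want to see on the resume."""
    name: str              # e.g., "mentoring experience" (hypothetical)
    keywords: List[str]    # terms that signal the attribute
    weight: float          # how much this attribute matters for the role

def score_resume(resume_text: str, years_experience: int,
                 criteria: List[Criterion], target_years: int) -> float:
    """Score a resume against the pre-planned framework."""
    text = resume_text.lower()
    score = 0.0
    for c in criteria:
        if any(kw.lower() in text for kw in c.keywords):
            score += c.weight
    # Years of experience count as one weighted attribute, not a hard filter.
    if years_experience >= target_years:
        score += 1.0
    return score

def fit_bucket(score: float, high_cut: float, medium_cut: float) -> str:
    """Bucket candidates into high/medium/low fit for human review."""
    if score >= high_cut:
        return "high"
    if score >= medium_cut:
        return "medium"
    return "low"
```

Note what is deliberately missing: there is no automatic elimination. Low-fit candidates get bucketed, not discarded, so a human can still find the great candidate that the keywords missed.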

From there, you need to give the AI feedback about how well it did. You do this by scoring the same resumes yourself, without looking at the AI's results, and then correcting the AI's output accordingly. Rinse and repeat. After three rounds, your AI has a pretty good idea of what you want, which means you can now use it to help you pick candidates. The key is to have the AI order the list of resumes and highlight key terms, so you know which ones to review first. Every time you review a candidate, give the AI system feedback about whether it did well. As a check, the AI should also slip one random candidate into every set of 10-15 resumes it surfaces. In all cases, remember that you do the candidate selection, not the AI.
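
Here is a rough sketch of that feedback-and-review loop, under the assumption that each resume has an ID and that both the AI and the blind human reviewer assign high/medium/low buckets; the block size of 12 is just one hypothetical choice within the 10-15 range.

```python
import random

def agreement(ai_buckets: dict, human_buckets: dict) -> float:
    """Fraction of blind-scored resumes where the AI's bucket matches the human's."""
    matches = sum(1 for rid, bucket in human_buckets.items()
                  if ai_buckets.get(rid) == bucket)
    return matches / len(human_buckets)

def review_order(ranked_ids: list, check_ids: list, block_size: int = 12) -> list:
    """Order resumes for review, slipping one random check candidate into each block."""
    pool = list(check_ids)
    random.shuffle(pool)
    ordered = []
    for i in range(0, len(ranked_ids), block_size):
        block = ranked_ids[i:i + block_size]
        if pool:
            # Hide one check candidate at a random spot in the block.
            block.insert(random.randrange(len(block) + 1), pool.pop())
        ordered.extend(block)
    return ordered
```

The idea is simply to keep correcting the system until the agreement score stays high from round to round, and to keep the random check candidates coming even after it does.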

Conducting Skill Tests with AI

One of the most important parts of candidate assessment is ascertaining whether the applicant can do the actual tasks that will be required on the job! But before you put in the time to look at resumes and cover letters, you should make sure that the portfolio you're reviewing belongs to someone who can walk the talk. Building on the tasks that the individual would have to do on the job, you can design a test that anyone capable of doing the job can pass, but that those without the requisite skills will fail. It is generally a good idea to team the hiring managers up with an HR expert and/or an organizational psychologist to make sure the test is appropriate and compliant with EEOC and ADA guidelines. As you design the test, consider the range of possible answers, both better and worse, and prepare for the fact that some of the best responses will almost certainly surprise you. Give the AI system your range of answers, then show it examples of responses that are high, medium, and low quality.
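
As a rough illustration of that last step, here is one way the graded examples could be used to train a simple text classifier. scikit-learn is used here purely as an example, the handful of sample answers below are invented, and a real grader would need many labeled responses per quality level.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented example responses, already graded by human experts.
example_answers = [
    "I would profile the slow query and add an index on the join column.",   # high
    "I would ask a teammate for help and try restarting the server.",        # medium
    "I'm not sure; databases aren't really my thing.",                       # low
]
example_grades = ["high", "medium", "low"]

# Show the AI the graded examples so it can learn what each level looks like.
grader = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
grader.fit(example_answers, example_grades)

# Suggest a grade for a new response; a human grader still makes the call.
print(grader.predict(["I would check the execution plan and then add an index."]))
```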

As with the resume parsing, you then need to run the AI in parallel with manual grading for a while, comparing how well its grades match those of the human graders and providing feedback to train the system. Once the match is consistently good, the AI will be ready to suggest which candidates are more likely to have given good answers, which means you can spend more time reviewing the answers of demonstrably qualified applicants. In all cases, it is crucial to keep giving the AI system feedback, and likewise to continue reviewing random candidates who (unbeknownst to the grader) did not receive high scores. This keeps the AI's algorithms continuously updated and ensures that the AI remains adjunctive rather than becoming the decision-maker.
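
One way to quantify how well the AI matches the human graders is an agreement statistic such as Cohen's kappa, which corrects for chance agreement. A small sketch follows; the grade lists are invented, and the audit helper is a hypothetical illustration of the random-review idea.

```python
import random
from sklearn.metrics import cohen_kappa_score

# Invented parallel grades for the same set of test answers.
human_grades = ["high", "medium", "low", "high", "medium", "low", "high", "low"]
ai_grades    = ["high", "medium", "low", "medium", "medium", "low", "high", "low"]

kappa = cohen_kappa_score(human_grades, ai_grades)
print(f"Human-AI agreement (Cohen's kappa): {kappa:.2f}")  # keep training until this stays acceptably high

def audit_sample(candidate_grades: dict, k: int = 3) -> list:
    """Pick a few non-high scorers to quietly mix into the grader's review queue."""
    not_high = [cid for cid, grade in candidate_grades.items() if grade != "high"]
    return random.sample(not_high, min(k, len(not_high)))
```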

Addressing Concerns About Demographic Bias

One might have hoped that the issue of demographic bias would be ancient history by now, yet it remains prevalent in every circle of humanity. It is no surprise, then, that the same biases show up in computer algorithms designed by us biased humans. Where AI can actually protect us from ourselves is that it can also run additional algorithms that look for biases and alert humans when the results look a bit too homogeneous. The system can continually scan for questions on which scores tend to vary by demographic, and likewise for patterns of scores or responses that are more prevalent in certain demographics. Because these algorithms are separate from the analyses of resumes and skill-test responses, they can keep us honest, and they will do so with the cold impartiality of a robot with no emotional skin in the game. Getting a red flag on demographics is an early warning that the test needs to be reviewed and possibly revised, which means an ever-evolving hiring process that moves inexorably closer to the meritocratic ideals that bring the best talent to companies.
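
Here is a minimal sketch of the kind of separate bias-scanning algorithm described above: it flags any test question where one demographic group's average score falls well below another's. The 0.8 ratio threshold is a hypothetical choice, loosely inspired by the EEOC's four-fifths rule for selection rates, and any real review should involve proper statistical testing and legal guidance.

```python
from collections import defaultdict
from statistics import mean

def flag_biased_questions(records: list, ratio_threshold: float = 0.8) -> list:
    """records: (question_id, demographic_group, score) tuples with scores >= 0."""
    by_question = defaultdict(lambda: defaultdict(list))
    for question_id, group, score in records:
        by_question[question_id][group].append(score)

    flagged = []
    for question_id, groups in by_question.items():
        group_means = {g: mean(scores) for g, scores in groups.items()}
        lowest, highest = min(group_means.values()), max(group_means.values())
        if highest > 0 and lowest / highest < ratio_threshold:
            flagged.append(question_id)  # early warning: review this question
    return flagged
```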

About the author
Orin Davis is the Principal Consultant at Quality of Life Laboratory Consulting and a lecturer/adjunct professor at several universities. He makes workplaces great places to work by applying positive psychology to hiring, engaging, and maximizing talent.