One of the things we often hear when talking to HR executives is that they are concerned about being ethical in their recruitment process. I find this concern very honourable. However, when digging deeper into these processes, I often find that the concern over ethical recruitment is mainly empty rhetoric, despite good intentions.
Most companies understand ethical recruitment as giving every candidate a fair chance. The problem is that this goal is too general to be useful in practice. When have you actually given an applicant a chance? When you have spent two minutes going over their CV? When you have checked their application for spelling errors?
The opposite effect.
When we say that everyone should get a chance, what often happens is the exact opposite. In the real world, information about candidates is neither organised nor easy to consume. The end result is inevitable: we make flawed assumptions and guesses about candidates.
While it depends on the position we are hiring for, the selection process is, as a rule, unethical simply because it is carried out by a human. As recruiters, we are all prone to some degree of bias.
A study published in the Proceedings of the National Academy of Sciences examined judges' decisions on whether to grant prisoners parole. It found that judges were drastically more lenient just after having had a meal. So next time you have a hearing before lunch, be careful and give your judge a Snickers bar.
Whether we look at judges granting a prisoner parole or a recruiter "judging" a candidate, the same cognitive biases apply. Experience is not necessarily a safeguard against them. In fact, the more confidence judges have in their own abilities, the less likely they are, unfortunately, to recognise their own blind spots and rationalisations.
All too human.
So, while on the surface it sounds reassuring that a "real" human is making the decisions, there are several reasons why this should make us question the ethics of recruitment.
One of the reasons humans are bad at seeing abstract patterns is that we are wired to associate experiences with immediate signals. If we put our hands in a flame, we get burnt, and hopefully we learn that fire hurts.
In hiring, however, the results are separated in time from the signals that led to the decision in the first place. Looking back a year after hiring the wrong candidate and trying to figure out what made it seem like a good idea at the time is practically impossible, and that is the best-case scenario. In the worst case, we never collect feedback on the candidate's performance, or are no longer around to receive it, and hence never learn from the initial decision.
Beaten at our own game.
A study by the National Bureau of Economic Research (NBER) showed that computers actually hire better candidates than recruiters do in low-skill service-sector jobs. It may only hold for lower-skilled jobs at the moment, but the potential is significant.
The reality is that machines excel at looking past misguided, though generally subconscious, racial or gender stereotypes about performance.
This is not to say that machines should replace all recruiters. But failing to incorporate the information they can provide in the recruitment process is not only damaging to performance; it can reasonably be claimed to be unethical.