Technology vs Intuition – Can we really take the Human out of Human Resourcing?

Recent reports have revealed that more and more companies are using technology to take the place of humans in recruitment. Algorithms have taken over on job sites and AI (Artificial Intelligence) is being used by headhunters and larger companies in the selection process. But can these really take over from human interaction in building the best teams?

Most people will have noticed that the job market has been taken over by online job sites such as Indeed and Totaljobs, where you upload your CV and their algorithms pull out the keywords and present you with the jobs that best fit your experience. Even these I find fairly pointless. In the interests of experimentation I uploaded my CV to one of them, but I don’t think the algorithms are ‘getting it’: working as a Technical Team Leader in a Telemedicine/Response Centre setting does not qualify me to be a Telecoms Engineer, and how it decided I was suitably qualified to be a Personal Trainer I don’t know!
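To see why naive keyword matching throws up such odd suggestions, here is a deliberately simplified sketch of the idea. The job titles and keyword lists are invented for illustration, and real sites use far more sophisticated parsing and ranking than this:

```python
# A minimal sketch of naive keyword matching between a CV and job adverts.
# All jobs and keywords below are invented for illustration only.

def extract_keywords(text):
    """Lower-case the text and treat its distinct words as 'keywords'."""
    return {word.strip(".,!/") for word in text.lower().split()}

def match_score(cv_keywords, job_keywords):
    """Score a job by how many of its keywords also appear in the CV."""
    return len(cv_keywords & job_keywords)

cv = "Technical Team Leader in a Telemedicine Response Centre"
jobs = {
    "Telecoms Engineer": "engineer telecoms technical network",
    "Care Coordinator": "telemedicine response centre care",
    "Personal Trainer": "personal trainer fitness leader",
}

cv_kw = extract_keywords(cv)
ranked = sorted(
    jobs,
    key=lambda job: match_score(cv_kw, extract_keywords(jobs[job])),
    reverse=True,
)
# ranked[0] is "Care Coordinator" (three shared words), but note that a
# single shared word like "leader" is enough to surface "Personal Trainer".
```

The point of the sketch is that one shared word taken out of context (‘Leader’, ‘Technical’) is all it takes for an irrelevant job to score, which is exactly the behaviour described above.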

Besides, I would almost never send any employer my full and complete CV. Firstly, it would include a whole lot of job experience that is of no interest to them at all (except, perhaps, for the conversational ‘bet you wouldn’t have guessed this about me’ parts). Secondly, it would be so long that, even at 8pt font, it would go on for pages. Thirdly, even for the jobs (and contracts) I have done, I would highlight the parts of the job, or of a particular contract, that were relevant to the one I was applying for – even for the human recruiter, keywords are king.

Now, it seems, technology is taking over even once you have been shortlisted. Many interviews are being carried out by video, with algorithms and AI deciding for the employers whom they should employ. The theory is that AI removes the possibility of bias in human interviewers, such as employing people similar to themselves or judging by first impressions. The systems also claim to read body language and facial cues to work out whether the candidate is being truthful about their experience.

The first problem I see with this is that, as anyone who has studied these things will tell you, you first have to ‘calibrate’ someone’s body language and facial cues before you can tell whether they are lying (or exaggerating the truth). ‘Calibration’ means observing their normal behaviour when you know they are being truthful. Although there are standard cues and behaviours, we are not all the same, and some people display behaviours that are out of line with the ‘norm’. Could this discriminate against those with Asperger’s, facial dystonia or other ‘disabilities’, or even against other ethnicities whose expressions and body language may be different?

My second issue with this method is the obvious one of not getting natural feedback from the interviewer. When someone is being interviewed by another human being they are able to pick up on cues given by the interviewer about whether they have given a full answer, whether they have gone on too long, what kind of information is being looked for etc. If this is seen as a bias then aren’t we treating the prospective employee as a robot rather than as a human who responds to other humans?

Perhaps, in time, we can teach AI that we are looking for certain types of people to build a team. Clearly we don’t want to employ a whole team of people who are exactly like the Manager, the Director, or whoever interviews them, but sometimes it is better to employ someone who hasn’t got exactly the right experience but who will fit into the team. Missing experience can be learned but the square peg may never fit into the round hole. I, for one, would rather find someone enthusiastic and willing than someone who has all the right keywords in their application.

In the past, when I interviewed people for jobs, I often found that the best members of staff didn’t necessarily have the best CVs. A human can ‘read between the lines’ and get a gut feeling that something isn’t quite right; if the candidate has all the right experience, you interview them anyway. More often than not the gut feeling proved right, but sometimes, on meeting them, you found a person you weren’t expecting (often because their CV or application had been written for them).

In the ‘80s and ‘90s psychometric tests were hailed as the way forward in recruiting staff. They were supposed to show whether you had the right aptitude for the job and whether you were the right fit for the company. Psychologists are still arguing about whether something as complex as personality can be measured by these tests. One of the problems is that the employer has to set the traits they are looking for in the employee. They are often looking for someone just like the previous ‘perfect’ employee who has moved on (when perhaps a new way of looking at the job by a new employee could improve processes). Sometimes the analysis of the data misses something (many people are different at work than they are in other settings, such as the organised worker whose home life is shambolic).

I remember going through these tests years ago, including an IQ test that said my IQ was 156 (which would make me some sort of genius – clearly not right!) and a personality test that decided I was a pessimist because I looked for the problems in every situation. A large part of my previous job had been doing risk analyses, so yes, I probably did look for problems – but with a view to making sure they didn’t happen!

My concern is that we are learning not to trust our own judgement.

I can understand a large company using AI to filter the hundreds of applicants they have for a position down to a few. In fact the headhunting AI ‘Helena’ from Woo reportedly puts forward candidates who are accepted for interview at more than double the rate achieved by human headhunters – including, apparently, some surprising ones that the humans wouldn’t have picked.

Using AI to replace interviews with real people is unlikely to prove quite as positive in building a successful team. It is exactly those qualities of AI that make it useful for shortlisting that make it impractical for recruiting. Those ‘first impressions’ that these AI methods are intended to override tend to be right. There is something about meeting someone face to face which, as humans, we pick up on. Call it what you will – aura, energy, impression, sensation, pheromones; we instinctively know something about them. Can we trust them, work with them, like them? AI might decide we are wrong about them being the right person for the job, but can it decide whether we will like them? This may not seem important to a large company – until it has a team at loggerheads with each other. As Google’s research into producing the perfect team showed, it is not just having the right mix of knowledge and experience that creates the best result; it is the trust that each member has in the others and in the team as a whole.

I have no doubt that AI will become more and more a part of the recruitment process, especially as AI itself learns and improves, but I believe there will still be a part to be played by humans in the recruitment of humans. Besides, do you really want to work for a company that values you, as a person, so little that you don’t meet another human being in the recruitment process until you have got the job?
