Emotion analytics used in AI recruitment tools are not only unethical but also inaccurate

February 27, 2020


A candidate is preparing for a job interview via video. They place themselves in front of the camera, try to stay engaged, and smile throughout the process.

Unknown to the candidate, the interviewers are using software that analyses even the smallest of facial expressions.

Why? The technology is intended to cross-reference facial movements and body language to determine how suitable the person is for the role. It is also meant to predict how successful a candidate will be in the position.

More unnerving still, the software is supposed to detect when interviewees lie. Gone, then, are the days of embellishing a resume or fabricating relevant experience to impress employers.

Called ‘emotion analytics’, this form of AI is not a dystopian future, but a reality in both South Korea and the United States.

The interview process

The process varies depending on the software, but generally the emotion analytics test begins with applicants completing a 20-minute task consisting of neuroscience-based video games, with each decision revealing something about their personality. Attitude towards risk features heavily in this section, as employers can supposedly learn whether the candidate is a potential liability.

This section is the first filtering of applicants based on their emotional characteristics. What’s concerning about the gamified element is that many older candidates are unfamiliar with the format, meaning they are automatically at a disadvantage in the selection process.

The next step is the video interview. Preset questions are answered via a call on a mobile, tablet, or computer. AI technology then captures movements — both voluntary and involuntary — to assess the person’s mood and traits.

The data from both steps is collected and reviewed, and a score is generated to reflect the candidate’s level of ‘employability’, both for the job itself and relative to other applicants.
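To make the process above concrete, here is a minimal, purely hypothetical sketch in Python of how such a pipeline might reduce a candidate to a single number. Every feature name, weight, and formula below is invented for illustration; no vendor has published its actual scoring method.

    # Purely illustrative: combining hypothetical game-task metrics and
    # video-interview features into a single 'employability' score.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        # Hypothetical outputs of the gamified assessment (0.0 to 1.0)
        risk_tolerance: float
        decision_consistency: float
        # Hypothetical features extracted from the video interview (0.0 to 1.0)
        vocal_steadiness: float
        expression_positivity: float

    # Hypothetical weights; in a real system these would be learned, opaque,
    # and, as critics argue, of questionable scientific validity.
    WEIGHTS = {
        "risk_tolerance": 0.20,
        "decision_consistency": 0.30,
        "vocal_steadiness": 0.25,
        "expression_positivity": 0.25,
    }

    def employability_score(c: Candidate) -> float:
        # Weighted sum of all features: one opaque number per candidate.
        return (WEIGHTS["risk_tolerance"] * c.risk_tolerance
                + WEIGHTS["decision_consistency"] * c.decision_consistency
                + WEIGHTS["vocal_steadiness"] * c.vocal_steadiness
                + WEIGHTS["expression_positivity"] * c.expression_positivity)

    applicants = [
        Candidate("Applicant A", 0.6, 0.8, 0.7, 0.9),
        Candidate("Applicant B", 0.9, 0.5, 0.6, 0.4),
    ]

    # Candidates are scored and then ranked against one another.
    for c in sorted(applicants, key=employability_score, reverse=True):
        print(f"{c.name}: {employability_score(c):.2f}")

The point of the sketch is structural: whatever the real feature set, the output is a single comparative score whose derivation the candidate never sees.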

Concerns over privacy and transparency

HireVue is a US-based organisation that claims to ‘deliver the best talent, faster with video interviews and AI-driven pre-hire assessments’.

HireVue states that it uses proprietary machine-learning algorithms to collate non-verbal cues, including eye movements, clothing details, and voice nuances, to build a representative profile of a candidate.

“Businesses frequently fail to demonstrate that AI decision-making tools are accurate, reliable, or necessary”

Earlier this year, however, HireVue faced fierce opposition from the Electronic Privacy Information Center (EPIC), a Washington, DC public interest research center focused on privacy issues.

In a petition filed on February 3 with the Federal Trade Commission (FTC), EPIC requested fair trade practices around the commercial use of AI across a number of industries. In terms of recruitment, EPIC accused HireVue of disregarding national standards of fairness and transparency.

In the petition, EPIC states that “businesses frequently fail to demonstrate that AI decision-making tools are accurate, reliable, or necessary”.

Editor’s note: after the publication of this article, a spokesperson for HireVue reached out to The Sociable with the following clarifying statement:

“HireVue’s assessments have never ‘read emotions’, attempt to ‘guess’ at the inner states of candidates or read body language, but focus exclusively on job-related skills and competencies identified and supported through a detailed job analysis and well-designed interview questions that reduce the significant impact of human biases”.

Companies using AI emotion analytics 

Unilever uses emotion analytics (Photo source: LinkedIn)

Unilever, Dunkin’ Donuts, and IBM are just a few of the large corporations utilising emotion analytics. At Unilever, AI recruitment tech is reported to have saved the consumer goods company 50,000 hours’ worth of hiring work in 2019.

Currently, the tech is mostly used for online applications to entry-level jobs — perhaps suggesting that these positions attract less trustworthy candidates, or instead highlighting that the tech is not yet credible enough to be applied to high-level roles.

Overall, the emotion analytics industry is projected to be worth $25 billion by 2025.

‘No scientific basis’

The AI Now Institute has called for a ban on emotion analytics (Photo source: LinkedIn)

Elsewhere, research institute AI Now released a report in December last year stating that emotion analytics has no scientific basis and should be banned from influencing decisions about people’s livelihoods.

The report calls for businesses and governments to ban emotion analytics technology until a more in-depth study of the risks involved has been conducted. AI Now went further still, condemning the “systemic racism, misogyny, and lack of diversity” in the AI industry as a whole.

Other issues arise when taking into account that humans transmit an immense volume of information when expressing themselves. Emotion analytics focus on an incredibly small and limited part of this information, mostly stemming from the face.

“Emotion analytics focus on an incredibly small and limited part of this information, mostly stemming from the face”

A recent paper by psychologist Lisa Feldman Barrett argued that a given configuration of facial movements can communicate something other than an emotional state. Barrett advocates for much more research into how people move their faces to express emotion and other social information.

Barrett also believes there is a severe need for greater scrutiny of the mechanisms used to perceive instances of emotion in fellow humans.

The results generated by emotion analytics, then, are merely an estimation of an individual’s emotions as perceived by a machine and then mediated to other (potentially untrained) people. This process of translation is not direct, nor does it connect with any real sense of human experience.

The HireVue spokesperson added, “We agree and support Dr. Barrett’s observations as well as the referenced research”.

Psychologists have said there is a severe need for greater scrutiny of the mechanisms used to perceive emotion in humans.

Potential to manipulate the tech

As more and more companies incorporate emotion analytics into their hiring process, and job-seekers begin to prepare for interviews with an algorithm, another worry has surfaced. Is it possible to cheat the technology?

Speaking to the Financial Times, facial expression expert Paul Ekman remarked on people’s ability to manipulate their emotions. He claimed that if people know they’re being observed, they become more self-conscious and can change their behaviour. Especially in a competitive environment like an interview, people are likely to want to change their emotions to secure the job.

“If people know they’re being observed, they’re more self-conscious and can change their behaviour”

The irony, then, is that if applicants can learn to trick emotion analytics systems (which seems entirely possible, considering the model is yet to be scientifically validated), employers would actively be recruiting a dishonest workforce.

Nonetheless, because of the short-term benefits of emotion analytics — a faster, more streamlined hiring process and an increase in the number of new hires — the technology looks set to stay in recruitment for the foreseeable future.

Article updated on March 2 to include the statements from the HireVue spokesperson.
