IF INTERESTED #156: Facial Expression Recognition
Scary and hopeful future of virtual meetings and hiring videos
hi,
Two recent news items about facial expression recognition hit the business tech agenda:
The startup Uniphore can now determine the mood of a person in a video meeting and guide the facilitator accordingly with an AI solution. The first use case is sales meetings, where the AI is said to coach the salesperson. For example, if the AI detects that the client is frustrated, it coaches the sales representative to be more empathetic and to listen at that moment. And Zoom, with its all-virtual market share, is jumping aboard as well. Here is the link to the news.
Intel and Classroom claim that their AI can detect, during a virtual class, whether students are bored or disengaged, and coach the teacher accordingly. The AI runs on Zoom and is aimed at helping teachers be more perceptive of their class's context. From where I am standing, it is the same application as above; it only sounds more humane because it is aimed at better education. Here is the link to the news.
In both cases, the AI is a black box and questionable at best. The companies claim that the AI is being trained on facial expressions across different races and is, or will be, accurate.
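To make the concern concrete, here is a crude sketch of the kind of pipeline these products presumably run: classify each video frame into an emotion label, aggregate over a time window, and fire a coaching prompt. Everything here is my own illustrative guess (the labels, the thresholds, the coaching rule); as noted above, the real systems are black boxes.

```python
from collections import Counter

def coach(frame_emotions, threshold=0.4):
    """Return a coaching prompt if a negative emotion dominates the window.

    frame_emotions: hypothetical per-frame labels from an upstream
    facial expression classifier (not implemented here).
    """
    counts = Counter(frame_emotions)
    total = len(frame_emotions)
    if counts["frustrated"] / total >= threshold:
        return "Client seems frustrated: slow down, acknowledge, and listen."
    if counts["disengaged"] / total >= threshold:
        return "Audience seems disengaged: ask a question or change pace."
    return None  # no intervention

# A five-frame window where "frustration" dominates triggers a prompt.
window = ["neutral", "frustrated", "frustrated", "neutral", "frustrated"]
print(coach(window))
```

Notice that every questionable judgment, from what counts as "frustrated" to where the threshold sits, is buried inside the tool; the person being judged never sees any of it.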
I mentioned the hiring use case a few months back in this article. Video interview companies (to categorise them crudely : ) claim to analyse candidates' emotions from their facial expressions and give hiring managers and recruiters insights about the reliability of their answers, among other things. Pymetrics, HireVue and Modern Hire are some examples, with Pymetrics taking the lead.
Regulators are late to the game, as usual. We haven't seen any protection of individual privacy or any deep analysis of how those AI algorithms work yet. There was news of New York City considering regulations for AI hiring tools, but nothing substantial has come out of it yet. HireVue decided to re-examine its AI hiring tools and removed facial analysis from them in January 2021.
It all looks nice and fancy on paper with minimum viable products and agile methodology, but when it comes to AI tools that impact people's careers and lives, we should be questioning those companies and pushing them to be more responsible.
I also don't want to sound entirely negative about the technology. We know that machine learning algorithms, NLP models and facial expression recognition can evaluate candidates much more fairly than humans can. I am all for the technology and hopeful for the future; we just need to put more care and healthy questioning into these tools, so that they don't scale up human biases.
If interested.
burak