Can Our Customers Deceive Facial Recognition Software?
- 1. Emotions written on people’s faces
- 2. Can we detect these different facial expressions?
- 3. Recognizing faces (and lies on them) is a trained skill
- 4. How to uncover the truth: rely on multiple sources
- 5. The Takeaway
A genuine smile needs two elements:
- The zygomaticus major muscle must contract, which allows the corner of the lip to rise to form a smile.
- The orbicularis oculi muscle should be activated, which surrounds the eye and raises the cheek while lowering the eyebrows.
If we don’t detect both facial movements, the smile is probably fake: voluntary rather than spontaneous, forced rather than genuine.
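The two-muscle rule above maps neatly onto FACS action units: AU12 (lip corner puller, zygomaticus major) and AU6 (cheek raiser, orbicularis oculi). As a minimal sketch (the AU intensity scores and the 0.3 threshold here are hypothetical inputs, not the output of any particular tool):

```python
def classify_smile(au_intensities, threshold=0.3):
    """Label a smile as genuine (Duchenne) only if both muscles fire.

    au_intensities: dict of FACS action unit scores in the 0.0-1.0 range,
    e.g. from a facial coding tool. AU12 = lip corner puller (zygomaticus
    major), AU6 = cheek raiser (orbicularis oculi).
    """
    lip_corner = au_intensities.get("AU12", 0.0) >= threshold
    cheek_raise = au_intensities.get("AU6", 0.0) >= threshold
    if lip_corner and cheek_raise:
        return "genuine"
    if lip_corner:
        return "probably fake"  # the mouth smiles, but the eyes don't
    return "no smile"

print(classify_smile({"AU12": 0.8, "AU6": 0.6}))  # genuine
print(classify_smile({"AU12": 0.7, "AU6": 0.1}))  # probably fake
```

Real facial coding pipelines estimate these activations per frame and smooth them over time, but the core decision rule is this simple.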
Most people have difficulty spotting these muscle differences. But if we can understand what lies behind facial movements, can we detect lies and deception with facial coding?
Emotions written on people’s faces
We usually associate facial analysis with ID control for security reasons, but there’s much more information we can get from a person’s face.
Our face reveals our internal emotional state, and by reading facial expressions we can tell what’s happening inside.
Usually, we don’t control our facial gestures. These movements happen on their own without our awareness.
However, there are times when we want to conceal our emotions, and we consciously adjust our reactions. We can intentionally manipulate them in the following ways:
- Simulate: we display an expression without feeling the corresponding emotion. We make a face we don’t really mean.
- Mask: we replace the expression of the true emotion with one that matches a different emotion. For instance, we might laugh while deep down we’re drowning in sorrow.
- Neutralize: we inhibit the expression of the true emotion and keep the face neutral. For instance, we might feel angry but maintain a poker face to hide it.
When we try to hide information, the true emotion can still leak out in different ways: as micro-expressions (too fast to catch), as subtle gestures (easy to overlook), or as more lasting gestures. People inevitably reveal their genuine emotion, especially when they’re eager to cover it up.
Can we detect these different facial expressions?
When it comes to micro-expressions, a study found that micro-gestures of true emotion appear within the first fraction of a second of a reaction (between 40 and 200 milliseconds) before being suppressed by one of the three strategies mentioned above.
A facial coding software like ours has the capacity to process hundreds of frames per second and is able to detect these sudden face movements. Plus, thanks to its AI engine, it can learn over time to become even more precise with its results.
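To see why frame rate matters here, a quick back-of-the-envelope calculation: at a given capture rate, how many frames actually fall inside a micro-expression lasting 40 to 200 milliseconds? (The frame rates below are illustrative, not a description of any specific product.)

```python
def frames_in_window(fps, duration_ms):
    """Number of whole frames a camera captures during an expression
    of the given duration in milliseconds."""
    return int(fps * duration_ms / 1000)

# Illustrative capture rates: ordinary webcam, high-speed consumer
# camera, and a hypothetical high-frame-rate facial coding pipeline.
for fps in (30, 120, 300):
    shortest = frames_in_window(fps, 40)   # briefest micro-expression
    longest = frames_in_window(fps, 200)
    print(f"{fps:>3} fps: {shortest}-{longest} frames per micro-expression")
```

At 30 fps, the briefest micro-expressions land on a single frame at best, which is why high-frame-rate capture makes such a difference for detecting them.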
AI might be more powerful than humans at detecting these gestures, simply because of its processing power. However, subtle expressions remain difficult. According to a recent study, humans are still better at recognizing these emotions than facial coding software: when shown images containing subtle emotional expressions, participants classified them more accurately than the software did.
Part of the problem is that training databases are filled with stereotypical facial expressions, so when the software analyzes faces in natural environments, it doesn’t perform as well as we’d hope. It may only be a matter of time before AI surpasses our ability to detect and classify these subtleties.
When it comes to lasting or more pronounced gestures, some emotions are easier to fake than others. People look more genuine when imitating a real smile than when imitating negative emotions. One reason is that people get far more daily practice faking smiles than faking other emotions. Negative emotions also recruit more muscles, which makes them even harder to control than a smile. Try consciously controlling a set of muscle movements that usually occur involuntarily, and you’ll see how challenging it is.
Recognizing faces (and lies on them) is a trained skill
Most people can tell when a child is lying, but is it as easy to detect with an adult? It’s challenging, especially if you have met them only once or twice.
The same applies to using facial recognition to detect lies. To perform well, you must train your software on a large dataset that serves as a reference for distinguishing genuine from faked emotional reactions. The database must include both spontaneous and voluntary (posed) expressions, so they can be compared against your study’s sample for a precise analysis.
Trained models can detect human expressions that occur in milliseconds, but to detect lies more accurately they should include different sources of information. This includes using body language, tone of voice, and semantic analysis, as well as facial coding.
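One common way to combine these sources is late fusion: score each channel with its own model, then blend the scores with a weighted average. A minimal sketch, where the per-channel scores and weights are entirely made up for illustration:

```python
def fuse_deception_scores(scores, weights):
    """Late fusion: weighted average of per-channel deception scores.

    scores: dict mapping channel name -> deception score in [0.0, 1.0]
    weights: dict mapping channel name -> relative importance
    """
    total_weight = sum(weights[ch] for ch in scores)
    return sum(scores[ch] * weights[ch] for ch in scores) / total_weight

# Hypothetical outputs from three separate models
scores = {"face": 0.7, "voice": 0.5, "text": 0.2}
# Hypothetical weights, e.g. tuned on a validation set
weights = {"face": 0.5, "voice": 0.3, "text": 0.2}

print(f"combined deception score: {fuse_deception_scores(scores, weights):.2f}")
```

The appeal of late fusion is that each channel can fail independently: a masked face may still be betrayed by a trembling voice or an inconsistent story.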
The following example might help illustrate that a more comprehensive approach is needed to investigate this phenomenon.
How to uncover the truth: rely on multiple sources
The United States and Canadian governments are testing a new lie detection system at border crossings called AVATAR (Automated Virtual Agent for Real-Time Truth Assessment).
Travelers interact with a virtual agent that asks pre-selected questions. The system analyzes facial features, tone of voice, and a transcript of the interview, then reports whether it detected any lie or deception from the interviewee. If you don’t pass the interview, you’re referred to a human officer for a more thorough assessment.
The goal is to speed up the process and make border controls more efficient by relying on AI. The results so far are striking: the system reportedly detects lies with 80–85% accuracy, far higher than the roughly 54% accuracy of human judges. It seems we might be interacting with AI more often than we think in the near future.
The Takeaway
Our face reveals more than we care to share and facial analysis is using this to help different industries meet their goals.
Our facial reactions are mostly involuntary, but we can conceal them in at least three ways: simulate an emotion that isn’t there, mask it with another one, or inhibit an emotion that’s trying to get out.
These emotional reactions show up on our faces as micro-expressions, subtle cues, or longer-lasting gestures, and it’s up to skilled observers or trained software to detect whether we’re trying to deceive.
In some cases humans perform better at detecting emotions; in others, AI engines clearly surpass people’s detection skills.
And remember, the more sources of information we have, the more accurate the software can be. Check out our AI software and find out what people think and feel about your content.