The liveness advantage: Transforming deepfake vulnerabilities into trust opportunities.

Presentation attacks and deepfakes pose more than just technological threats — they challenge the very foundation of trust that powers every online interaction. Thankfully, IDnow remains dedicated to offering the industry’s most dynamic defense system against AI-generated attacks.

Although face verification has become an affordable and efficient way to authenticate user identities, it remains a vulnerable part of the identity verification process and highly susceptible to fraud attacks.  

Businesses that wish to protect themselves – and their customers – from the increasingly tech-savvy fraudster must ensure they leverage the latest technologies, including liveness detection, to stay one step ahead. If they don’t, they run the risk of joining the growing list of European companies that lose millions in attacks every year.  

IDnow continually enhances its face verification and liveness detection capabilities — not as isolated features, but as integral components of our comprehensive trust ecosystem.  

Interested in how IDnow has been working to mitigate skin tone bias in face verification systems? Read our blog, ‘A synthetic solution? Facing up to identity verification bias.’

What is liveness detection?

Liveness detection leverages biometric characteristics, such as facial features or fingerprints, to detect whether the presented subject is a living person. In the case of remote face verification, it ensures the captured digital image features a real face.  

Liveness detection also flags so-called ‘unreal faces’, which fraudsters present during the selfie submission stage of face verification. In these presentation attacks, fraudsters present printed photos, videos on monitors, or even t-shirts or personalized hygienic masks in front of the recording device. 

Liveness detection should also protect against new attack vectors, including those that use invisible noise and patterns to camouflage the attack. The attacker may also choose to exploit capture conditions, such as backlighting, to make detection more difficult. 

While presentation attacks have been known to unlock mobile devices, they also pose a significant risk during the face verification steps of identity verification and KYC processes. It is for this reason that businesses must ensure they have a robust identity verification process that includes liveness detection, presentation attack detection (PAD), injection detection, and deepfake detection.

How does presentation attack detection (PAD) work?

Thankfully, presentation attacks do leave clues that liveness experts can identify. These clues can be subtle and take many forms, such as the moiré patterns (interference patterns) that appear when a screen is recorded, color alteration, face deformation, or unnatural motion.

Well-crafted attacks can be difficult for the human eye to identify, but thanks to the rise of deep learning-based AI, PAD is becoming more effective. Trained on massive volumes of data, these models can detect subtle clues and anomalies: some can point out invisible patterns, while others can detect unusual textures in image patches. Researchers have even managed to detect the minute variations in face color caused by blood pulsation.
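To make one of these clues concrete, the following is a minimal, illustrative sketch of a classical frequency-domain signal, not a description of IDnow's production models: recordings of a screen often carry periodic moiré energy at higher spatial frequencies than a genuine live capture. The cutoff value and the toy images are arbitrary placeholders; a real PAD system would learn its decision boundaries from labelled data.

```python
import numpy as np

def high_frequency_energy_ratio(gray_face: np.ndarray, cutoff: float = 0.25) -> float:
    """Share of spectral energy outside a low-frequency disc.

    Screen replays often add periodic moiré artefacts that inflate this
    ratio compared to a genuine live capture.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_face))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)  # 0 at centre, ~1 at edges
    return float(spectrum[radius > cutoff].sum() / spectrum.sum())

# Toy comparison: a smooth synthetic "face" vs. the same image with a
# superimposed periodic pattern, mimicking a screen-replay artefact.
smooth = np.outer(np.hanning(128), np.hanning(128))
replayed = smooth + 0.2 * np.sin(2.0 * np.arange(128))
print(high_frequency_energy_ratio(smooth))    # low ratio
print(high_frequency_energy_ratio(replayed))  # noticeably higher ratio
```

In practice, hand-crafted signals like this are only one ingredient; deep learning models combine many such cues across texture, motion, and color into a single liveness score.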

PAD has improved dramatically in accuracy and adoption over the years and remains an incredibly important area of online fraud prevention. To keep up to date with industry developments and reinforce our detection capabilities, IDnow regularly participates in initiatives like the SOTERIA project and contributes to research such as ‘A Novel and Responsible Dataset for Face Presentation Attack Detection on Mobile Devices,’ a paper presented at the IJCB 2024 conference.

Nathan Ramoly shows how easy it is to create a deepfake.

Deepfakes and the double-edged sword of AI.

While artificial intelligence has enabled new fraud detection capabilities, it has also proven a powerful tool for attacking KYC systems using deepfakes (AI-generated or heavily modified faces).  

In the not-too-distant past, creating deepfakes required expert skills, but now, thanks to multiple easy-to-use websites and apps, even those with little technological expertise can generate deepfakes in just a few seconds.

There are three main categories of deepfakes: 

  • Generated faces: Using Generative AI, the attacker creates a face to resemble the target or creates a whole new fake identity.  
  • Face reenactment: The attacker acquires a picture of a target’s face, typically from social media, and uses AI to animate it, adding motion to the static picture to create a fake video from a single photo. 
  • Face swap: Here, the attacker acquires a picture of the target’s face and records a capture of themselves. Both images are uploaded to a deepfake tool, which extracts the biometric traits of the target and modifies the attacker’s face in a new capture so that it resembles the target. 

Once generated, deepfakes are injected into the KYC process through ‘virtual cameras,’ which present the images as regular selfies. Sophisticated deepfakes mimic genuine capture sessions, with natural motion, lighting, and texture, and without deforming the face. As such, deepfakes can bypass both face verification and PAD, which makes the business case for implementing robust liveness detection even more compelling. 
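As a purely illustrative aside, and not a description of IDnow's detection logic, one weak signal an injection check can use is the label of the capture device: a selfie stream that reports coming from a known software camera deserves extra scrutiny. In this hypothetical sketch, the device label would come from the browser's camera enumeration or a native camera API in a real integration, and the driver list is only an example.

```python
# Names of common virtual-camera drivers; illustrative list only.
KNOWN_VIRTUAL_CAMERAS = ("obs virtual camera", "manycam", "snap camera", "xsplit vcam")

def is_suspicious_capture_device(device_label: str) -> bool:
    """Weak injection signal: the selfie stream reports a software camera."""
    label = device_label.lower()
    return any(name in label for name in KNOWN_VIRTUAL_CAMERAS)

print(is_suspicious_capture_device("OBS Virtual Camera"))  # True
print(is_suspicious_capture_device("FaceTime HD Camera"))  # False
```

Because device labels are easy to spoof, robust injection detection has to go further, looking at device integrity, stream provenance, and anomalies in the video itself.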

To address the complexity of threats posed by deepfakes, we combine our proprietary AI-based detection technologies, including liveness detection, with solutions from our trusted partners. This hybrid approach enables us to offer a powerful and adaptive fraud prevention system that complies with European regulations, without compromising the smooth and secure onboarding experience users expect.

How IDnow’s liveness detection works.

Our liveness detection technology features three separate checks, which work together as sketched below: 

  • Presentation attack detection to flag when living faces are replaced by other media, such as pictures or masks. 
  • Injection attack detection to flag when imagery is fed directly into the verification flow, for example via virtual cameras, instead of being captured live by a genuine device. 
  • Deepfake detection to identify faces generated or altered by advanced AI.
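As a simplified, hypothetical illustration of how three independent checks can feed a single accept-or-reject decision (the scores, thresholds, and fail-any policy below are placeholders, not IDnow's actual decision logic), consider the following sketch:

```python
from dataclasses import dataclass
from typing import Dict, Optional

# Placeholder thresholds; a production system would calibrate these on labelled data.
THRESHOLDS = {"presentation": 0.5, "injection": 0.5, "deepfake": 0.5}

@dataclass
class LivenessResult:
    scores: Dict[str, float]   # risk score per check, in [0, 1]; higher means more suspicious
    accepted: bool
    reason: Optional[str]

def evaluate_selfie(scores: Dict[str, float]) -> LivenessResult:
    """Fail-any policy: a single suspicious check is enough to reject the session."""
    for check, score in scores.items():
        if score > THRESHOLDS[check]:
            return LivenessResult(scores, accepted=False, reason=f"{check} check failed")
    return LivenessResult(scores, accepted=True, reason=None)

# Hypothetical scores produced by the three detectors for one selfie session.
print(evaluate_selfie({"presentation": 0.1, "injection": 0.05, "deepfake": 0.8}))
```

A fail-any policy like this favors security over convenience; real systems tune each threshold to balance fraud catch rates against false rejections of genuine users.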

Facing up to the future of liveness attacks.

As the ability to detect deepfakes improves, we can expect new types of attacks to emerge, especially adversarial attacks, which aim to exploit new-found weaknesses in AI models.  

By seamlessly integrating our proprietary AI-driven liveness detection with complementary technologies from our trusted partner network, we’ve created a dynamic defense system that anticipates and neutralizes emerging threats before they can compromise security. 

Plus, our commitment extends beyond our current capabilities. We’ve positioned our platform to evolve continuously, ensuring that as deepfake technologies advance, our detection and prevention mechanisms remain at the forefront of innovation. In doing so, we enable businesses to build lasting customer relationships founded on unshakeable trust, regardless of how the threat landscape transforms. 

In this new era where digital identity verification faces unprecedented challenges, IDnow stands as the trusted European leader in identity verification technology, empowering businesses to turn potential vulnerabilities into competitive advantages through intelligent, adaptive and continuous trust management.

By Nathan Ramoly
Research Scientist, Biometrics Team
Connect with Nathan on LinkedIn
