Putting responsible AI into practice: IDnow’s work on bias mitigation

As part of the EU-funded MAMMOth project, IDnow shows how bias in AI systems can be detected and reduced – an important step toward trustworthy digital identity verification.

London, October 30, 2025 – After three years of intensive work, the EU-funded MAMMOth (Multi-Attribute, Multimodal Bias Mitigation in AI Systems) project has published key findings on reducing bias in artificial intelligence (AI) systems. Funded by the EU's Horizon Europe program, the project brought together a consortium of leading universities, research centers, and private companies from across Europe.

IDnow, a leading identity verification platform provider in Europe, contributed directly to the project as an industry partner. Through targeted research and testing, IDnow developed an optimized AI model that significantly reduces bias in facial recognition; the model is now integrated into IDnow's solutions.

Combating algorithmic bias in practice

Facial recognition systems that leverage AI are increasingly used for digital identity verification, for example, when opening a bank account or registering for a car-sharing service. Users take a digital image of their face, and AI compares it with the photo on their submitted ID document. However, such systems can exhibit bias, leading to poorer results for certain demographic groups. A major cause is the underrepresentation of minorities in public data sets, which can result in higher error rates for people with darker skin tones.
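As a simplified illustration of how such a comparison typically works (not a description of IDnow's implementation), face verification systems usually map each image to an embedding vector and accept a match when the two vectors are sufficiently similar. The embedding size and threshold below are illustrative assumptions:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(selfie_emb: np.ndarray, id_photo_emb: np.ndarray,
           threshold: float = 0.6) -> bool:
    """Accept the identity claim if the embeddings are close enough.
    Real systems calibrate the threshold on labeled verification pairs."""
    return cosine_similarity(selfie_emb, id_photo_emb) >= threshold

# Toy usage: random vectors stand in for embeddings that a face
# recognition model (e.g., a CNN mapping a face crop to a 512-d vector)
# would produce for the selfie and the ID photo.
rng = np.random.default_rng(0)
selfie = rng.normal(size=512)
id_photo = selfie + rng.normal(scale=0.1, size=512)  # same person, mild noise
print(verify(selfie, id_photo))  # True for this near-duplicate pair
```

Bias enters when the embedding model places faces from underrepresented groups closer together than it should, so genuine matches and mismatches become harder to separate at any fixed threshold.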

A study by the MIT Media Lab ("Gender Shades", 2018) showed just how significant these discrepancies can be: the commercial facial analysis systems it evaluated had an error rate of only 0.8% for light-skinned men, while the error rate for dark-skinned women reached 34.7%. These figures clearly illustrate how unbalanced many AI systems are – and how urgent it is to train them on more diverse data.

As part of MAMMOth, IDnow worked specifically to identify and minimize such biases in facial recognition – with the aim of increasing both fairness and reliability.

"Research projects like MAMMOth are crucial for closing the gap between scientific innovation and practical application. By collaborating with leading experts, we were able to further develop our technology in a targeted manner and make it more equitable," says Montaser Awal, Director of AI & ML at IDnow.

Technological progress with measurable impact

As part of the project, IDnow investigated possible biases in its facial recognition algorithm, developed its own approaches to reduce these biases, and additionally tested bias mitigation methods proposed by other project partners.

For example, ID photos often undergo color adjustments by issuing authorities, and this calibration is not always optimized for darker skin tones. The resulting color shift can create inconsistencies between a person's live selfie and their appearance in the ID photo.

To address this problem, IDnow used a style transfer method to expand its training data, which made the model more resilient to varying capture and calibration conditions and significantly reduced the bias affecting people with darker skin tones.
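The project materials do not spell out the exact augmentation pipeline, but a minimal sketch of the underlying idea is color-statistics transfer: re-coloring training selfies so their color distribution mimics differently calibrated ID photos. The sketch below uses a classic Reinhard-style transfer in LAB space as a simple stand-in for a full style transfer model; all names and values are illustrative:

```python
import numpy as np
from skimage import color  # scikit-image

def color_transfer(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Shift the color statistics of `source` toward `reference` by matching
    per-channel mean and std in LAB space (Reinhard-style transfer).
    Both inputs are float RGB images with values in [0, 1]."""
    src, ref = color.rgb2lab(source), color.rgb2lab(reference)
    out = np.empty_like(src)
    for c in range(3):  # L, a, b channels
        s_mean, s_std = src[..., c].mean(), src[..., c].std()
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - s_mean) / (s_std + 1e-8) * r_std + r_mean
    return np.clip(color.lab2rgb(out), 0.0, 1.0)

# Augmentation idea: for each training selfie, also generate copies whose
# color statistics mimic differently calibrated ID photos, so the model
# learns that identity does not depend on such color shifts.
selfie = np.random.rand(112, 112, 3)    # stand-in for a selfie crop
id_style = np.random.rand(112, 112, 3)  # stand-in for an ID-photo patch
augmented = color_transfer(selfie, id_style)
```

Each augmented copy would be added to the training set alongside the original, teaching the model that a person's identity is invariant to such color calibration differences.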

Tests on public and company-owned data sets showed that the new training method achieved an 8% increase in verification accuracy – while using only 25% of the original training data volume. Even more significantly, the accuracy difference between people with lighter and darker skin tones was reduced by over 50% – an important step toward fairer identity verification without compromising security or user-friendliness. 
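To make the "accuracy difference" concrete: a common way to quantify such a gap is to compute verification accuracy per demographic group and take the spread between groups. The sketch below uses toy data and hypothetical group labels; it is not the project's actual evaluation code:

```python
import numpy as np

def group_accuracy_gap(correct: np.ndarray, group: np.ndarray) -> dict:
    """Per-group verification accuracy and the largest gap between groups.
    `correct` marks whether each verification decision was right;
    `group` holds a (hypothetical) demographic label per sample."""
    per_group = {g: float(correct[group == g].mean()) for g in np.unique(group)}
    gap = max(per_group.values()) - min(per_group.values())
    return {"per_group": per_group, "gap": gap}

# Toy data: group "A" at 75% accuracy, group "B" at 100%.
correct = np.array([1, 1, 1, 0, 1, 1, 1, 1], dtype=bool)
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(group_accuracy_gap(correct, group))  # gap = 0.25
# Halving `gap` after retraining would correspond to the >50% reduction above.
```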

The resulting improved AI model was integrated into IDnow’s identity verification solutions in March 2025 and has been in use ever since.

Setting the standard for responsible AI

In addition to specific product improvements, IDnow plans to use MAI-BIAS, the open-source toolkit developed in the project, in its internal development and evaluation processes. This will allow fairness to be comprehensively tested and documented before new AI models are released – an important contribution to responsible AI development.
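A pre-release check of this kind can be as simple as a fairness gate in the evaluation pipeline. The sketch below illustrates the pattern such a toolkit supports; it does not use MAI-BIAS's actual API, and the gap budget and function names are assumptions:

```python
# Hypothetical pre-release fairness gate (NOT MAI-BIAS's API).
def passes_fairness_gate(per_group_accuracy: dict, max_gap: float = 0.02) -> bool:
    """Block a release if the accuracy spread between groups exceeds a budget."""
    gap = max(per_group_accuracy.values()) - min(per_group_accuracy.values())
    return gap <= max_gap

results = {"group_a": 0.991, "group_b": 0.968}  # made-up evaluation numbers
if not passes_fairness_gate(results):
    raise SystemExit("Fairness gap exceeds budget; model not released.")
```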

“Addressing bias not only strengthens fairness and trust, but also makes our systems more robust and adoptable,” adds Montaser Awal. “This will raise trust in our models and show that they work equally reliably for different user groups across different markets.”
