How IDnow’s latest collaborative research project, MAMMOth, is making the connected world a safer and fairer place for everyone.
While the ability of AI to optimize efficiencies and improve processes is well documented, there are also genuine concerns regarding its susceptibility to human bias and how unfair data processing can lead to the perpetuation of discriminatory practices and social inequality.
IDnow leverages AI in its proprietary algorithms to automate three critical tasks:
1. Capturing and parsing identity documents.
2. Ensuring document compliance and authenticity.
3. Verifying the document holder’s identity, including a facial verification step.
Research conducted by IDnow has revealed demographic biases in facial verification algorithms, primarily linked to underrepresentation of certain ethnic groups.
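Such biases typically surface as error rates that differ across demographic groups. As a minimal illustrative sketch (the group labels, scores, and threshold below are invented for demonstration and are not IDnow data or methodology), a per-group false non-match rate for genuine comparisons can be computed like this:

```python
from collections import defaultdict

def false_non_match_rates(records, threshold=0.5):
    """Compute the false non-match rate (FNMR) per demographic group.

    records: iterable of (group, score) pairs for *genuine* comparisons,
    i.e. pairs that should match; a score below `threshold` means the
    algorithm wrongly rejected a genuine pair.
    """
    totals = defaultdict(int)
    misses = defaultdict(int)
    for group, score in records:
        totals[group] += 1
        if score < threshold:  # genuine pair wrongly rejected
            misses[group] += 1
    return {g: misses[g] / totals[g] for g in totals}

# Hypothetical genuine-comparison scores for two groups.
records = [
    ("group_a", 0.9), ("group_a", 0.8), ("group_a", 0.4),  # 1 miss of 3
    ("group_b", 0.6), ("group_b", 0.3),                    # 1 miss of 2
]
rates = false_non_match_rates(records, threshold=0.5)
```

A large gap between the per-group rates is the kind of signal that flags a demographic bias worth investigating.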
In a bid to break down the barriers of bias, IDnow has been collaborating with 12 European partners, including academic institutions, associations and private companies, as part of the MAMMOth project, for about a year.
Funded by the European Research Executive Agency, the goal of the three-year project is to study existing biases and offer a toolkit for AI engineers, developers and data scientists so that they may better identify and mitigate biases in datasets and algorithm outputs.
Three use cases have been identified:
- Assessment of loan applications.
- Academic evaluation and visibility of academic work. In the academic world, the reputation of a researcher is often tied to the visibility of their scientific papers and how frequently they are cited. Studies have shown that on certain search engines, women and authors from less prestigious countries or universities tend to be less represented.
- Face verification applied to identity verification.
IDnow will predominantly be focusing on the face verification use case, with the aim of implementing methods to mitigate biases found in algorithms.
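The project has not published the specific mitigation methods here, but one simple baseline for counteracting dataset underrepresentation is to reweight training samples inversely to their group's frequency. The sketch below is hypothetical and illustrative only, not a description of IDnow's or MAMMOth's approach:

```python
from collections import Counter

def balancing_weights(groups):
    """Assign each sample a weight inversely proportional to the
    frequency of its demographic group, so under-represented groups
    contribute as much to training as over-represented ones.

    Weights are scaled so they sum to the number of samples.
    """
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

# Hypothetical dataset: three samples from one group, one from another.
weights = balancing_weights(["a", "a", "a", "b"])
# The lone "b" sample receives a larger weight than each "a" sample.
```

In practice, reweighting is only one of several families of mitigation techniques; others intervene on the data (resampling, augmentation) or on the model itself (fairness-constrained training).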
Hearing the voice of the under-represented.
In an effort to provide a tool that meets the needs of those affected by bias, the project has the support of social science researchers at UNIBO, and three associations working to promote the voice of under-represented groups: IASIS, DAF, and DDG. Their involvement in MAMMOth enables IDnow to understand the needs and expectations of under-represented groups.
Through a series of questionnaires and workshops, these partners have collected input from under-represented groups, especially women and ethnic minorities. The responses revealed apprehension about being discriminated against by identity verification algorithms, underscoring the need to ensure an algorithm's reliability before it is deployed. These groups also advocated for human oversight of algorithmic decisions.
Other concerns included possible data leakage, highlighting the need to address privacy and security issues as systems are developed. Notably, some participants also pointed out that human judgment can itself be biased, and that responsible use of AI could reduce that risk.
AI for a responsible future.
At IDnow, we believe the value of innovation lies in anticipating problems that have yet to emerge but are sure to shape the future. That is why we have invested heavily in research and AI technologies.
“We are proud to be a part of such an important collaborative research project. These studies underscore the need for trustworthy, unbiased facial verification algorithms. This is the challenge that IDnow and MAMMOth partners aim to overcome during the remaining two years of the project,” said Lara.
By Jody Houton, Senior Content Manager at IDnow