    Trust in AI for Biometrics - Why it Matters

    Tim McGarr, AI Market Development Lead, AI Regulatory Services, explores the importance of trust in AI for biometrics and the steps organizations can take to build it.

    To understand why trust in AI for biometrics is so important, let’s start with a definition of biometric data. According to the European Commission, it is defined as: “Personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data.”

    Due to the sensitivity of this data and the potential threat to citizens’ rights, biometric technologies are included in the EU AI Act, which introduces new transparency requirements and even bans certain AI applications. The inclusion of biometrics is important as the AI-enabled biometric market is growing, particularly in areas such as security threat prevention.

    BSI research shows support for such regulation, with 97% of business leaders agreeing that AI regulation or international guidelines are important for the safe, ethical and responsible use of AI.

    The introduction of guardrails has the potential to build trust in AI applications amongst both businesses and individuals, particularly in biometric technologies. With 52% of business leaders stating they are more worried about AI today than they were a year ago, measures that provide reassurance on the use of biometrics are all the more important – not only at a personal level, but to facilitate relationships between businesses looking to work together.

    Trust in AI systems that access such data, and the expectation of safe, responsible, and ethical processing, is now front and centre in supporting the fundamental rights of individuals. This trust can be driven by a number of things, from regulations like the recent EU AI Act to compliance with international standards and, more broadly, transparency on the part of providers. The key is to get to a place where organizations no longer embed biometrics into an AI system without considering its impact on society.

    Steps to build trust in AI-based biometric systems

    The EU AI Act classifies AI systems according to risk level, accounting for the potential impact they could have on the safety and rights of individuals.

    Under the terms of the regulation, AI systems that fall into high-risk categories must meet more stringent criteria to demonstrate responsible development and use. In light of this, a critical first step for biometric identification providers is to determine the risk categorisation of the biometric identification system, considering questions like the following (a simple screening sketch follows the list):

    • Does it impact on the health of an individual or community?
    • What is the complexity of data held? Does this make it an attractive target from a cybersecurity perspective?
    • Is consent required? Could someone’s privacy be at risk?
    • Could any fundamental rights be affected?
    • Does the AI model make and execute decisions without human intervention? 
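
    To make these questions actionable, here is a minimal sketch of an initial risk-screening record in Python. The class and factor names are hypothetical illustrations of the checklist above, not part of the EU AI Act or any BSI tooling; formal classification still requires legal and conformity assessment.

```python
from dataclasses import dataclass, fields

# Hypothetical screening helper for a first-pass (non-authoritative)
# risk triage of a biometric AI system. The factors mirror the
# questions in the list above.
@dataclass
class RiskScreen:
    impacts_health: bool           # health of an individual or community
    sensitive_data_target: bool    # attractive cybersecurity target
    privacy_or_consent_risk: bool  # consent required / privacy at risk
    fundamental_rights_risk: bool  # fundamental rights could be affected
    autonomous_decisions: bool     # decides and acts without human oversight

    def likely_high_risk(self) -> bool:
        # Conservative rule of thumb: any single "yes" flags the system
        # as potentially high risk pending a formal assessment.
        return any(getattr(self, f.name) for f in fields(self))

screen = RiskScreen(
    impacts_health=False,
    sensitive_data_target=True,
    privacy_or_consent_risk=True,
    fundamental_rights_risk=True,
    autonomous_decisions=False,
)
print("Treat as potentially high risk:", screen.likely_high_risk())
```

    The any-factor rule is deliberately conservative: a single affirmative answer routes the system to a fuller assessment rather than attempting to weigh factors automatically.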

    For high-risk AI-based biometric systems, it will be key to become familiar with the EU AI Act requirements, including the standards that can help with alignment. For lower-risk systems, there is still the opportunity to lay out the steps you want to take to manage AI responsibly. This is particularly relevant given that rapid model advances mean a system can become high risk over time.

    Regardless of risk level, it is worth noting that 84% of business leaders agree that buy-in is important for the successful roll-out of AI in business. Communication and training around AI policies and procedures will be instrumental in building trust in your organization’s system.

    So, the key question will be: how are you going to not only make all teams aware of your AI policy and strategy, but also help them understand and contribute to its success?

    There is an opportunity to train your employees, particularly data science and development teams, as part of your trust agenda. It will be key to encourage those working with your data sets and building the models within your biometric products to think and document against the following principles at every stage of the system’s lifecycle to support responsible AI (a sketch of one way to record this follows the list):

    • Ethics
    • Fairness
    • Privacy
    • Security
    • Robustness
    • Explainability
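
    As one illustration of what documenting against these principles could look like in practice, here is a minimal sketch of a per-stage record, again in Python. The stage names, schema, and helper methods are assumptions for illustration, not a standard or a BSI-prescribed format.

```python
from dataclasses import dataclass, field

# Illustrative (hypothetical) structure for recording responsible-AI
# evidence at each lifecycle stage; not a standard schema.
PRINCIPLES = ("ethics", "fairness", "privacy", "security",
              "robustness", "explainability")

@dataclass
class StageRecord:
    stage: str                                # e.g. "data collection"
    notes: dict = field(default_factory=dict) # principle -> evidence

    def document(self, principle: str, evidence: str) -> None:
        if principle not in PRINCIPLES:
            raise ValueError(f"Unknown principle: {principle}")
        self.notes[principle] = evidence

    def gaps(self) -> list:
        # Principles with no documented evidence at this stage yet.
        return [p for p in PRINCIPLES if p not in self.notes]

record = StageRecord(stage="model training")
record.document("fairness", "Error rates compared across demographic groups.")
record.document("privacy", "Training data pseudonymized; consent records kept.")
print("Still to document at this stage:", record.gaps())
```

    Keeping such records per stage – data collection, training, evaluation, deployment, monitoring – gives reviewers a simple view of where evidence is missing before a system ships.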

    By clearly defining your risk profile and prioritizing communication and training across your organization, you can show that trust in AI-based biometric systems is high on your agenda, for the benefit of all stakeholders, particularly society at large.

    Want to learn more about helping your data science and development teams? Listen to our “From code to compliance” webinar on demand.