    AI Standards – Application & Interaction in Assessments

    AI offers enormous potential to help address some of society's biggest challenges, ranging from using predictive analytics to reduce climate impact to accelerating diagnoses for healthier, longer lives. To foster and speed up the adoption of AI, we explore various opportunities within AI governance to build trust and advance its use. We start by examining how standards are applied and how they interact with assessments of AI systems.

    Trustworthiness in artificial intelligence (AI) systems is a key aspect of AI governance. While the definition of trustworthiness can vary based on the specific application or use case, some commonly accepted characteristics fall under this concept. These include transparency, robustness, human oversight, data governance, and risk management. Together, these elements help ensure that AI systems are reliable, functional, and aligned with ethical standards. Failing to achieve these qualities can result in harm to users.

    To support the development of trustworthy AI, a substantial body of academic research has been produced and numerous international standards have been established. The European Union’s (EU) AI Act outlines a governance framework for AI development, detailing how AI should be evaluated, documented, and recorded. While the AI Act doesn’t mandate specific standards, it encourages harmonization across all stages of the AI lifecycle, ensuring that each critical point aligns with the Act’s requirements. This framework provides a solid foundation for decision-makers, such as AI developers, to work responsibly. As part of this approach, the concept of "presumption of conformity" emerges: an AI system is assumed to meet regulatory requirements if it aligns with these recognized standards.

    This article will explore the EU AI Act and AI standards, focusing on their relationship in assessing AI systems. We will also explain the harmonization process and the presumption of conformity, both of which help organizations comply with the EU AI Act and enhance the trustworthiness of their AI systems by adhering to globally recognized standards.

    Introduction

    In recent years, AI technology has made significant strides, prompting widespread debate about when, or if, Artificial Intelligence will reach or surpass human capabilities. As AI evolves rapidly, it introduces new risks, highlighting the urgent need for effective governance. Proper oversight is crucial to ensure the safe development of AI, mitigate potential hazards, and build public trust in its beneficial uses.

    In this article, we will explore the EU AI Act and its relationship with AI standards. We will review key sections of the EU AI Act that outline the requirements for high-risk AI systems, their significance, and how they are considered when evaluating compliance. Additionally, we will examine concepts such as "state-of-the-art" (SOTA), harmonization, and the presumption of conformity. Finally, we’ll discuss how the EU AI Act’s articles may align with ISO Standards, offering guidance for AI system assessments.

    The EU AI Act and Its Articles

    Let’s begin with an overview of the EU AI Act. This Act establishes a legal framework outlining the requirements that AI products must meet to be legally placed on the market within EU member states. Only AI systems that meet these requirements are approved for market entry and can be legally deployed within the EU.

    In the following, we will try to answer three fundamental questions:

    1. What are these requirements?
    2. Why are they important?
    3. How do we know whether we are compliant?

    What Are These Requirements, and Why Are They Important?

    To address the first two questions, let's begin by examining some of the key elements of the EU AI Act that form the foundation of its governance framework. Specifically, we will review eight crucial Articles of the Act, which outline the requirements for high-risk AI systems. These Articles provide clear guidelines for ensuring compliance and safe deployment of AI technologies classified as high-risk.

    Article 9—Risk Management System: Risk management includes risk assessment, evaluation, mitigation, and control. The idea of Article 9 is to anticipate and reduce risks before they affect the end user; this is important because it makes safety proactive rather than reactive, catching potential issues before they become actual problems.
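
    To make this concrete, here is a minimal sketch of a risk register in Python. The fields, the 1-to-5 scales, and the severity-times-likelihood score are illustrative assumptions on our part; neither Article 9 nor any specific standard prescribes this structure.

    ```python
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Risk:
        """One entry in a simple AI risk register (illustrative fields)."""
        description: str
        severity: int    # assumed scale: 1 (negligible) to 5 (critical)
        likelihood: int  # assumed scale: 1 (rare) to 5 (frequent)
        mitigation: str
        owner: str
        review_date: date

        @property
        def score(self) -> int:
            # A common severity-times-likelihood heuristic; the AI Act
            # does not mandate any particular scoring formula.
            return self.severity * self.likelihood

    register = [
        Risk("Model misclassifies rare traffic signs", severity=4, likelihood=2,
             mitigation="Augment training data; route low-confidence cases to a human",
             owner="ML team", review_date=date(2025, 6, 1)),
    ]

    # Surface the highest-scoring risks first for review.
    for risk in sorted(register, key=lambda r: r.score, reverse=True):
        print(f"[{risk.score:>2}] {risk.description} -> {risk.mitigation}")
    ```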

    Article 10—Data and Data Governance: Data and its governance are the cornerstones of any AI system. This article addresses the need for meticulous labelling, careful preprocessing, and diligent efforts to mitigate bias during training, validation, and testing. Why is this important? Because data quality dictates the model's quality, and poor data practices can result in a model that is ineffective and potentially harmful.
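
    As a hint of what automated checks under such data governance might look like, the following sketch flags missing labels and under-represented classes in a pandas DataFrame. The column name and the 10% share threshold are assumptions chosen for illustration.

    ```python
    import pandas as pd

    def basic_data_checks(df: pd.DataFrame, label_col: str = "label",
                          min_class_share: float = 0.10) -> list[str]:
        """Return a list of data-quality findings (illustrative checks only)."""
        findings = []
        # Completeness: every training example should carry a label.
        n_missing = int(df[label_col].isna().sum())
        if n_missing:
            findings.append(f"{n_missing} row(s) have no label")
        # Balance: a class far below the assumed threshold may signal bias.
        shares = df[label_col].value_counts(normalize=True)
        for cls, share in shares.items():
            if share < min_class_share:
                findings.append(f"class {cls!r} covers only {share:.1%} of the data")
        return findings

    df = pd.DataFrame({"label": ["ok", "ok", "ok", "defect", None]})
    print(basic_data_checks(df))  # -> ['1 row(s) have no label']
    ```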

    Article 11—Technical Documentation: Like any other technology, AI systems require comprehensive documentation. This includes a clear statement of the intended purpose, the underlying model design, and the software used in creating the AI. Moreover, ensuring process quality within this documentation is critical for meeting EU standards.
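
    One lightweight way to keep such documentation consistent is a machine-readable, model-card-style record. The field names in the sketch below are our own choices for illustration; they are not the official technical-documentation schema of the EU AI Act.

    ```python
    import json

    # Hypothetical model card; the structure is illustrative, not the
    # EU AI Act's official technical-documentation template.
    model_card = {
        "intended_purpose": "Detect surface defects on assembly-line images",
        "model_design": {"architecture": "ResNet-50", "framework": "PyTorch"},
        "training_data": {"source": "internal line-scan dataset v4",
                          "size": 120_000, "labeling": "two-annotator review"},
        "known_limitations": ["untested under night-shift lighting"],
        "version": "1.4.2",
    }

    with open("model_card.json", "w") as f:
        json.dump(model_card, f, indent=2)
    ```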

    Article 12—Record Keeping: This involves detailed recording of events, maintaining logs, and continuous post-market monitoring. The goal is to maintain traceability and enable robust risk management practices. These records are vital for accountability and the ongoing evaluation of the AI product's impact after its release.
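
    For illustration, here is a minimal structured event log in Python. The event fields are our own assumptions: the Act requires logging capabilities and traceability, but it does not prescribe this exact format.

    ```python
    import json
    import logging
    import time
    import uuid

    # Append one JSON line per prediction so events can be traced later.
    logging.basicConfig(filename="ai_events.log", level=logging.INFO,
                        format="%(message)s")

    def log_prediction(model_version: str, input_id: str,
                       output: str, confidence: float) -> None:
        event = {
            "event_id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "model_version": model_version,
            "input_id": input_id,
            "output": output,
            "confidence": confidence,
        }
        logging.info(json.dumps(event))

    log_prediction("1.4.2", "frame-000123", "defect", 0.87)
    ```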

    Article 13—Transparency and Information to Users: These two concepts are crucial to fostering an environment of trust between the technology and its users. Transparency is critical: it's about making the AI's intended purpose clear, providing accessible instructions for use, and ensuring the end user can understand and interpret the AI's functions and outputs.

    Article 14—Human Oversight: The principle here is control and safety. AI systems must be designed to remain under human oversight: this ensures that no matter how autonomous the AI is, the final decision-making power rests with a human who can understand and mitigate any risks to users' safety. The importance of human oversight cannot be overstated; it's our safeguard against unforeseen issues and a core principle of responsible AI development.
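
    A common pattern for implementing such oversight is a human-in-the-loop gate that escalates uncertain decisions instead of acting on them automatically. The sketch below is a minimal illustration; the 0.90 confidence threshold is an assumed value that would have to be set per use case.

    ```python
    # Predictions below an assumed confidence threshold are deferred to a
    # human reviewer, keeping the final decision with a person.
    CONFIDENCE_THRESHOLD = 0.90  # illustrative value, not mandated anywhere

    def decide(prediction: str, confidence: float) -> str:
        if confidence >= CONFIDENCE_THRESHOLD:
            return f"auto-accept: {prediction}"
        return f"escalate to human review (confidence={confidence:.2f})"

    print(decide("grant_loan", 0.97))  # auto-accept: grant_loan
    print(decide("deny_loan", 0.62))   # escalate to human review (...)
    ```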

    Article 15—Accuracy, Robustness, and Cybersecurity: The EU AI Act requires that the model be accurate and robust against varying conditions and potential cyber threats; this involves incorporating privacy considerations from the start. It's crucial because it directly impacts user trust: people need to feel confident that their data is secure and that the AI's decisions are reliable.

    Article 17—Quality Management System: The main topic here is ensuring ongoing compliance with the EU AI Act through continuous quality control and quality assurance processes. The goal is a systematic approach to maintaining the standards set by the EU AI Act throughout the life cycle of the AI system. Article 17 is the backbone that supports all the other compliance efforts.

    Having reviewed the main points of the EU AI Act, how can we check whether these requirements are satisfied when conducting assessments?

    How Do We Know Whether We Are Compliant?

    We could evaluate AI systems directly against the text of the EU AI Act. However, given that numerous AI standards have already been published and many more are in development, we should also assess how well our AI models align with globally recognized standards, particularly those from the International Organization for Standardization (ISO).

    Some of these standards are likely to be harmonized with the EU AI Act. But what exactly does harmonization entail? When a standard is harmonized with the AI Act, following that standard can satisfy the corresponding articles in the Act. This creates a "presumption of conformity": compliance with a harmonized standard is assumed to meet the requirements of the AI Act. Harmonization thereby achieves two key objectives at once: it ensures that AI systems comply with the AI Act, and it aligns them with internationally recognized standards.

    This process simplifies compliance, reinforces the credibility of AI systems, and boosts their trustworthiness. The added benefit is that integrating these standards into compliance checks aligns us with global best practices, often referred to as the State of the Art (SOTA), ensuring our AI systems meet the highest quality and reliability standards worldwide.

    ISO Standards and the Articles

    As we mentioned earlier, many standards are already available, with more being developed over time. In this discussion, we will focus specifically on ISO standards.

    To check for compliance with the EU AI Act (AIA) using ISO standards, the key question is: Which ISO standards should we use? With a variety of standards covering different aspects of AI, it's important to select those most relevant to the Act's requirements. Some ISO standards address topics like AI risk management, transparency, and ethical considerations, all of which are critical for high-risk AI systems under the AI Act.

    EU AIA Articles                                             | ISO Standards
    9: Risk Management System                                   | 23894, 42001, 5338
    10: Data & Data Governance                                  | 4213, 24027, 24029-1, 5259-1 to 5259-5
    11: Technical Documentation                                 | 23894, 42001
    12: Record Keeping                                          | 23894
    13: Transparency and Provision of Information to Deployers  | 24028, 23894
    14: Human Oversight                                         | 23894, 42001
    15: Accuracy, Robustness and Cybersecurity                  | 4213, 24029-1
    17: Quality Management System                               | 42001, 24029-1, 23894, 5259-3, 5259-4

    The table above provides a detailed overview of several ISO standards, listed in the right-hand column, alongside the eight key articles of the EU AI Act in the left-hand column, which were discussed in the previous section. It demonstrates how specific ISO standards could correspond to the articles of the EU AI Act, offering a potential roadmap for aligning AI systems with both the Act and global standards. Once the harmonization process is complete, a similar official table will be published.

    Some ISO standards appear frequently in the table. For example, ISO 4213: Assessment of Machine Learning Classification Performance is crucial for evaluating machine learning and AI systems. It directly relates to Article 10 (Data & Data Governance) and Article 15 (Accuracy, Robustness, and Cybersecurity), addressing essential aspects like data quality and system performance.
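
    As a rough illustration of the kind of evaluation ISO 4213 is concerned with, the snippet below computes a confusion matrix and standard classification metrics with scikit-learn. This is only a sketch in the spirit of the standard, not its exact assessment protocol, and the toy labels are invented.

    ```python
    from sklearn.metrics import (accuracy_score, classification_report,
                                 confusion_matrix)

    # Toy ground truth and predictions for a binary defect detector.
    y_true = ["ok", "ok", "defect", "ok", "defect", "defect"]
    y_pred = ["ok", "defect", "defect", "ok", "ok", "defect"]

    print(confusion_matrix(y_true, y_pred, labels=["ok", "defect"]))
    print(classification_report(y_true, y_pred, digits=3))
    print("accuracy:", accuracy_score(y_true, y_pred))
    ```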

    Another significant standard is ISO 24029-1: Assessment of the Robustness of Neural Networks, which focuses on evaluating how well AI models perform under small input variations. This standard is tied to Article 10 (Data & Data Governance), Article 15 (Accuracy, Robustness, and Cybersecurity), and Article 17 (Quality Management Systems), as robustness is a critical element of these areas.
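
    A crude way to approximate this kind of robustness evidence is to measure accuracy while adding small random perturbations to the inputs, as in the sketch below. This is our own simplification on a toy model, not the statistical method defined in ISO 24029-1.

    ```python
    import numpy as np

    def accuracy_under_noise(predict, X, y, sigma, trials=10, seed=0):
        """Mean accuracy when Gaussian noise of scale sigma is added to X."""
        rng = np.random.default_rng(seed)
        accs = [np.mean(predict(X + rng.normal(0.0, sigma, size=X.shape)) == y)
                for _ in range(trials)]
        return float(np.mean(accs))

    # Toy model: classify points by the sign of their single feature.
    predict = lambda X: (X[:, 0] > 0).astype(int)
    X = np.array([[0.9], [-1.2], [0.4], [-0.3]])
    y = np.array([1, 0, 1, 0])

    for sigma in (0.0, 0.1, 0.5):
        acc = accuracy_under_noise(predict, X, y, sigma)
        print(f"sigma={sigma}: accuracy={acc:.2f}")  # degrades as sigma grows
    ```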

    For Article 10 (Data & Data Governance), two additional standards stand out:

    • ISO 24027: Bias in AI Systems and AI-Aided Decision Making, which helps detect and mitigate bias when constructing datasets for AI systems (a simple bias check is sketched after this list).
    • ISO 5259: Data Quality for Analytics and Machine Learning (ML) (a series of five standards), which provides best practices for creating data process frameworks.
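
    To give a flavour of such bias checks, the sketch below computes a simple demographic parity gap, one of many group-fairness metrics discussed in the bias literature. The data, column names, and choice of metric are illustrative assumptions, not a procedure taken from ISO 24027.

    ```python
    import pandas as pd

    def demographic_parity_gap(df, group_col, outcome_col):
        """Spread between the highest and lowest positive-outcome rates
        across groups; a large gap is a potential bias signal."""
        rates = df.groupby(group_col)[outcome_col].mean()
        return float(rates.max() - rates.min())

    df = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   0],
    })
    gap = demographic_parity_gap(df, "group", "approved")
    print(f"demographic parity gap: {gap:.2f}")
    # -> 0.33, i.e. group A is approved twice as often as group B here
    ```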

    Furthermore, management system standards play a pivotal role across all articles of the EU AI Act. ISO 23894: Guidance on Risk Management and ISO 42001: AI Management System are foundational to building effective quality management systems, which are essential for creating reliable AI models.

    It’s worth noting that other organizations, such as the IEEE, are also developing standards that will complement ISO’s work. These future standards will expand the compliance and quality assurance frameworks, offering even more robust guidelines for AI governance.

    Conclusion

    The EU AI Act marks a crucial step toward comprehensive AI governance by outlining key requirements for high-risk AI systems. A central part of this effort is the harmonization process, which brings several important benefits:

    1. It offers a streamlined path to compliance with the EU AI Act.
    2. It promotes alignment with globally recognized best practices and state-of-the-art methods.
    3. It simplifies the regulatory process through the presumption of conformity.
    4. It enhances the credibility and reliability of AI systems on a global scale.

    Together, the alignment between the EU AI Act and international standards paves the way for a future where AI systems are robust, innovative, transparent, accountable, and firmly rooted in human values and ethical principles.


    Authors:

    Dilara Gumusbas Donida Labati received the M.Sc. degree from the Faculty of Electronics and Computer Science, University of Southampton, Southampton, U.K., in 2014 and the Ph.D. degree in Electronics and Telecommunication Engineering from Yıldız Technical University, Istanbul, Turkey, in 2020. During her doctoral studies, she worked as a graduate researcher at the Industrial, Environmental and Biometric Informatics Laboratory, University of Milan, Italy. Her research interests include machine learning, deep learning, representation learning, and signal processing, spanning mainly the security domain: biometrics and network security.

    Simone Vazzoler received a PhD in Mathematics from the University of Padova in 2013 and has over seven years of experience building Machine Learning and Artificial Intelligence models and bringing them to production to solve business problems, focusing on Natural Language Understanding, Computer Vision, and Anomaly Detection. His research interests span from Symplectic Geometry and Dynamical Systems to Deep Learning for Computer Vision and Anomaly Detection.