    Shaping the Future of Digital Trust With AI

    Explore the transformative power of the EU AI Act in building digital trust while prioritizing societal welfare.

    AI adoption by global organizations has surged: as of 2022, 56% of companies had integrated AI into various functions. While AI offers unparalleled opportunities, public uncertainty remains regarding misinformation, trustworthiness, and fairness.

    In response to these challenges, regulators are acting, exemplified by the introduction of the EU AI Act.

    In this blog, we look at the significance of the Act and explore how adopting legislative guardrails contributes to fostering digital trust in AI.

    What is the EU AI Act?

    The EU AI Act is poised to become the pioneering law in AI regulation. It underscores the EU's dedication to establishing digital trust and prioritizing societal well-being.

    The Act outlines a comprehensive framework that addresses ethical, performance, and transparency requirements for placing an AI system on the market, all while detailing provisions to safeguard fundamental rights.

    Its focus on human-centric and ethical AI aims to ensure responsible AI deployment across Europe. The Act emphasizes the use of risk-management systems, promoting proactive evaluation and mitigation of potential risks.

    There are four defined risk categories for AI systems within the Act, each carrying its own level of scrutiny and set of obligations (illustrated in the sketch after the list below). They are as follows:

    • Unacceptable or prohibited AI systems.
    • High-risk AI systems.
    • AI systems that present limited risk.
    • AI systems considered to present no risk.
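
    To make the four-tier structure concrete, here is a minimal, illustrative sketch in Python. The tier names mirror the Act's categories, but the obligations shown are simplified paraphrases for illustration only, not the Act's legal text.

    # Illustrative sketch only: the four risk tiers come from the EU AI Act,
    # but the obligations below are hypothetical simplifications.
    from enum import Enum

    class RiskTier(Enum):
        PROHIBITED = "Unacceptable or prohibited"
        HIGH = "High-risk"
        LIMITED = "Limited risk"
        MINIMAL = "No or minimal risk"

    # Hypothetical mapping, used purely to show how scrutiny escalates by tier.
    EXAMPLE_OBLIGATIONS = {
        RiskTier.PROHIBITED: "Cannot be placed on the EU market.",
        RiskTier.HIGH: "Conformity assessment, risk management, human oversight.",
        RiskTier.LIMITED: "Transparency obligations, e.g. disclosing AI interaction.",
        RiskTier.MINIMAL: "No additional obligations beyond existing law.",
    }

    if __name__ == "__main__":
        for tier in RiskTier:
            print(f"{tier.value}: {EXAMPLE_OBLIGATIONS[tier]}")

    In practice, determining which tier a real system falls into requires legal analysis of its intended use and context, not a simple lookup.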

    Aim of the EU AI Act

    The EU AI Act is designed to promote both innovation and regulation. By abiding by a systematic and predictable regulatory framework, the Act aims to foster responsible AI development and deployment, thereby unlocking potential economic and societal benefits.

    Simultaneously, the Act emphasizes the protection of fundamental rights through comprehensive rules that prevent encroachment on users’ welfare and well-being. In essence, the EU AI Act underscores the parallel roles of technological advancement and risk management, aiming to mitigate potential risks while supporting innovation.

    High-risk AI systems

    High-risk AI systems must comply with specific stipulations of the AI Act and are subject to increased scrutiny. To identify such a system, there are a number of references within the Act for what may be considered high risk. For example, AI systems incorporated into medical devices are considered high risk under Annex II of the Act.

    Implementing the AI Act

    On June 14, 2023, the European Parliament adopted its negotiating position on the AI Act. This position, along with the Council's position, will be the subject of trilogue negotiations between the EU Parliament, the EU Council, and the European Commission.

    The AI Act introduces regulatory "sandboxes" hosted by different national entities for the development, testing, and validation of AI systems. In 2022, a pilot sandbox was initiated by the Spanish government. Additionally, the Act includes a number of provisions for small and medium-sized enterprises to alleviate the burden of its regulatory requirements.

    Why regulation matters for AI

    AI is already deeply integrated into our digital interactions and holds the power to shape the information we encounter and access online. This impact can range from algorithms on social media to personalized advertisements and online banking experiences.

    This legislative text will play a pivotal role in guiding AI's evolution while ensuring user protections. Similar to the General Data Protection Regulation (GDPR), which became a worldwide standard after taking effect in 2018, the EU AI Act has the potential to serve as a global benchmark.

    As AI becomes more prevalent in our lives, BSI is closely observing developments as the EU AI Act moves toward formal adoption as a Regulation. This legislative text will not only influence the regulation of such systems but will also support the development and adoption of best-practice standards to establish the essential bedrock of digital trust.

    This trust is vital for the widespread acceptance and positive use of AI systems in society.