The new AI regulation – regulatory overview

The European Union has taken a significant step towards regulating artificial intelligence (AI) with the adoption of the new regulation laying down harmonised rules on artificial intelligence (the AI Regulation, also known as the AI Act) on May 21, 2024. As an EU regulation, the AI Regulation applies directly in all member states without the need for national implementing legislation.

This legal framework is unique internationally and aims to create a safe, transparent and trustworthy environment for the use of AI technologies while promoting innovation. The AI Regulation is product regulation: it governs AI systems placed on the market or put into service, not development activities as such.

The AI Regulation divides AI systems into different risk categories; the higher the risk, the higher the requirements.

Systems with unacceptable risk are prohibited, such as AI-based systems for the social scoring of individuals by public authorities. High-risk systems are subject to strict requirements and include applications in areas such as healthcare, law enforcement and critical infrastructure; these systems require extensive safety and transparency measures. Limited-risk systems must meet certain transparency requirements, such as labelling AI-generated content. Minimal-risk systems are largely unregulated and include most AI applications currently available on the EU internal market, such as AI-based video games and spam filters.

Extensive requirements apply to high-risk AI systems. Providers must implement a risk management system to identify and minimize potential risks at an early stage. Comprehensive technical documentation is required, describing the development, training and operation of the AI system. Providers must ensure that the data sets used for development and training are of high quality and free of bias. Users must be informed about the functionality and risks of the AI system. A wide range of further obligations apply, such as market surveillance and reporting obligations and the requirement to carry out a conformity assessment and affix conformity marking.

Here are some examples of the different risk classes:

Examples of prohibited AI systems

  • Use, placing on the market or operation of AI systems that are incompatible with the fundamental rights of the European Union.
  • Systems designed to manipulate people’s behavior, such as deliberately manipulated content on social media used to pursue commercial or political goals.
  • Rating systems (social scoring) in which people’s personal characteristics are evaluated and conclusions are drawn from them.

Examples of high-risk systems

  • Diagnostic tools or AI components in medical devices
  • AI systems in human resources for the evaluation and selection of applicants
  • AI systems for evaluating people for life insurance
  • AI systems in critical infrastructures

Examples of systems with limited risk

  • Chatbots and virtual assistants, which must be clearly labelled as AI systems
  • AI-powered social media filters to detect and block spam or inappropriate comments

Examples of systems with minimal risk

  • Spam filter
  • AI systems for personalizing advertising

Given these extensive requirements, a careful assessment of whether an AI system qualifies as high-risk is necessary, and it is advisable to address the obligations under the AI Regulation at an early stage. An AI system in a high-risk area may nevertheless be exempt from classification as high-risk if it does not pose a significant risk to the health, safety or fundamental rights of natural persons. This is the case if the AI system is only intended to perform a narrow procedural task, to improve the result of a previously completed human activity, or to detect decision-making patterns and deviations without replacing or influencing the prior human assessment, provided that a proper human review takes place.

In addition to these provider obligations, there are also obligations for deployers, importers and distributors. In particular, numerous obligations apply to deployers, i.e. persons or authorities who use a high-risk AI system under their own authority, provided the system is not used for purely private purposes.

Artificial intelligence (AI) will play an increasingly important role in various areas of life. The AI Regulation provides an initial regulatory framework, but the widespread use of AI will also give rise to numerous new legal challenges and questions.

Thanks to our careful preparation for the AI Regulation and our dedicated team of specialists – consisting of computer scientists, engineers, pharmacists and lawyers – we are already developing and implementing compliant solutions to offer you innovative and secure AI applications.

Our services include in-depth innovation consulting to help you optimally integrate AI into your business processes. We support you in implementing AI in PSP (patient support program) and HCP (healthcare professional) systems, as well as in using AI for internal corporate applications and for field service. In addition, we offer customized training and education to ensure that your team is well versed in the latest AI developments and can use them effectively. Our expertise ensures a seamless integration of AI that increases your company's value and gives you a competitive advantage.
