
Risk Assessment and the AI Act

What is it?

When developing or using artificial intelligence, it is crucial to assess the risk that something could go wrong. AI can be powerful and efficient — but also unpredictable, biased, or outright dangerous if not used thoughtfully. That is why all AI projects should be accompanied by a systematic risk assessment.

The EU’s new AI regulation, known as the AI Act, specifically requires organizations to address risk. The regulation classifies AI systems into four risk levels, which can be used both legally and practically to analyze and improve systems — whether or not one is directly subject to the law.

Examples:

The four risk levels under the AI Act are as follows:

  • Unacceptable risk: AI systems considered fundamentally dangerous or harmful and therefore completely prohibited. This includes, for example, social scoring systems like those used in China, where citizens are monitored and rewarded or punished based on their behavior, or AI systems that manipulate children or vulnerable groups.


  • High risk: AI systems that have a major impact on people’s rights and life opportunities but can be used under strict conditions. Examples include facial recognition in public spaces, AI in recruitment, or algorithms used in healthcare or the judiciary. These require documentation, transparency, and oversight.


  • Limited risk: AI systems that interact with humans and may influence behavior but do not make critical decisions. This includes chatbots and recommendation systems. Typically, such systems must be clearly disclosed, and users must have the option to opt out.


  • Minimal risk: AI systems with low impact that do not require special regulation. Examples include spell checkers, automatic image filters, or AI that sorts documents internally within an organization.

What to consider?
  • Even if you are not yet legally obligated to comply with all parts of the AI Act (which is still being rolled out at the time of writing), the regulation’s risk model is a powerful tool for internal assessment. It can help you ask: How risky is the AI system we are building or using? And: What can we do to reduce that risk? A minimal sketch of such a self-assessment follows this list.


  • Consider how your AI solutions affect people — especially those in vulnerable situations. The higher the risk, the greater the need for documentation, transparency, and human oversight. It is not just about legal compliance — it is about responsible technology development that accounts for errors, biases, and unintended consequences.
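
As a rough illustration of how the four levels can drive an internal self-assessment, here is a minimal Python sketch. The questions, the function name, and the decision order are illustrative assumptions of my own, not the legal test in the regulation; a real classification depends on the detailed criteria and annexes of the AI Act itself.

```python
from enum import Enum


class RiskLevel(Enum):
    """The four risk levels described in the AI Act."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


def indicative_risk_level(
    used_for_social_scoring: bool,
    manipulates_vulnerable_groups: bool,
    affects_rights_or_life_opportunities: bool,
    interacts_directly_with_users: bool,
) -> RiskLevel:
    """Map a few yes/no questions about an AI system to an indicative risk level.

    This is a simplified internal-assessment aid, not a legal classification.
    """
    # Prohibited practices (e.g. social scoring, manipulation of vulnerable groups)
    if used_for_social_scoring or manipulates_vulnerable_groups:
        return RiskLevel.UNACCEPTABLE
    # Systems with a major impact on rights and life opportunities
    if affects_rights_or_life_opportunities:
        return RiskLevel.HIGH
    # Systems that interact with people but make no critical decisions
    if interacts_directly_with_users:
        return RiskLevel.LIMITED
    return RiskLevel.MINIMAL


# Example: a recruitment screening tool affects people's life opportunities,
# so it lands in the high-risk bucket and needs documentation and oversight.
print(indicative_risk_level(
    used_for_social_scoring=False,
    manipulates_vulnerable_groups=False,
    affects_rights_or_life_opportunities=True,
    interacts_directly_with_users=True,
))  # RiskLevel.HIGH
```

Even a simple checklist like this makes the follow-up question concrete: the higher the indicated level, the more documentation, transparency, and human oversight the project should plan for.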


Understanding and applying the risk levels from the AI Act is a strong way to ensure that AI creates value without causing harm.

