Article updated on September 4, 2024

What are examples of high-risk AI in the Act?

The EU AI Act defines high-risk AI systems as AI systems that:

  • Are used as a safety component or a product covered by EU laws in Annex I AND required to undergo a third-party conformity assessment under those Annex I laws; OR
  • Fall under the use cases listed in Annex III, except if the AI system:
    • performs a narrow procedural task;
    • improves the result of a previously completed human activity;
    • detects decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment without proper human review; or
    • performs a preparatory task to an assessment relevant for the purpose of the use cases listed in Annex III.
  • An AI system is always considered high-risk if it profiles individuals, i.e. performs automated processing of personal data to assess aspects of a person’s life, such as work performance, economic situation, health, preferences, interests, reliability, behaviour, location or movement.
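The classification rules above amount to a decision procedure: an Annex I route, an Annex III route with four carve-outs, and a profiling override. The sketch below illustrates that logic only; the function name and the boolean flags are hypothetical simplifications of legal tests that in practice require case-by-case analysis, and this is not a compliance tool.

```python
# Illustrative sketch of the EU AI Act high-risk classification logic.
# Each flag is a hypothetical stand-in for a legal test under the Act.

def is_high_risk(
    annex_i_safety_component: bool,        # safety component of / product covered by Annex I laws
    annex_i_third_party_assessment: bool,  # those laws require third-party conformity assessment
    annex_iii_use_case: bool,              # falls under an Annex III use case
    narrow_procedural_task: bool = False,
    improves_prior_human_activity: bool = False,
    detects_patterns_with_human_review: bool = False,
    preparatory_task_only: bool = False,
    profiles_individuals: bool = False,
) -> bool:
    # Profiling of individuals is always high-risk, regardless of carve-outs.
    if profiles_individuals:
        return True
    # Annex I route: safety component AND third-party conformity assessment.
    if annex_i_safety_component and annex_i_third_party_assessment:
        return True
    # Annex III route: high-risk unless one of the four carve-outs applies.
    if annex_iii_use_case:
        exempt = (
            narrow_procedural_task
            or improves_prior_human_activity
            or detects_patterns_with_human_review
            or preparatory_task_only
        )
        return not exempt
    return False
```

For example, a system under an Annex III use case that only performs a narrow procedural task would fall outside the high-risk category, while the same system would remain high-risk if it also profiles individuals.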

Providers whose AI system falls under the use cases in Annex III but who believe it is not high-risk must document that assessment before placing the system on the market or putting it into service.