The Guardrails for High Risk AI Required by the EU AI Act

What Machine Learning Systems do for the High Risk AI Applications in the EU AI Act
December 19, 2023
13 min read
Rik Van Bruggen
VP EMEA, Hopsworks

TL;DR

Last week we published a much-read article covering the meaning and implications of the EU’s AI Act, specifically for the so-called “High-Risk” applications of AI. That article spurred the all-important question: what are we, Hopsworks, going to DO about it, and how are we going to do it?! Let’s explore that in this article and provide some initial thoughts and guidance.

The Scope: High Risk AI

Last week we published a much-read article covering the meaning and implications of the EU’s AI Act, specifically for the so-called “High-Risk” applications of AI. Before we elaborate further, let’s narrow the scope of our discussion: for now at least, we will only consider “High-Risk” AI applications as they are referred to in the Annexes to the Act. We will not focus on the other categories, because we believe that high-risk AI applications will be the most impacted by the Act. They are also the ones that corporate and government organizations currently putting Machine Learning and Artificial Intelligence to work will have to worry about.

If your AI system is considered “High-Risk AI”, the consequence is that your AI (or machine learning) system will have to be assessed before it can be placed on the market. The definition of High-Risk AI provided by the EU AI Act is quite clear:

1. AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts.

2. AI systems falling into eight specific areas that will have to be registered in an EU database:

  • Biometric identification and categorisation of natural persons
  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits. Note that this includes, for example, AI systems intended to be used to evaluate the creditworthiness of natural persons - something that every single financial institution does.
  • Law enforcement
  • Migration, asylum and border control management
  • Assistance in legal interpretation and application of the law.

For these AI systems, operators will need to put in place specific guardrails that limit and manage the actual risk of these applications.

Figure 1: EU AI Act Risk-based classification of AI systems

Guardrails for AI: Good News for All!

We have all witnessed the boom in AI and machine learning over the past 12-18 months. Where these technologies used to sit on the fringe of a typical company’s operational environment, they are now quickly becoming mainstream - most likely because of the demystification that came with the launch of ChatGPT and related applications. Companies are rapidly gaining an understanding of the power of ML and AI systems, not just in the context of Large Language Models (LLMs), but also in other use cases where machine-learning-based predictions can play a powerful role.

This means that in a very short period of time, the technology has moved from fringe, skunkworks-style projects in the darkest corner of the data scientist’s sunlight-deprived offices, to much more prominent use cases. And therefore, the rules of the game have changed: not only do these projects now need professional operational platforms, but they will also need to implement governance and compliance rules more stringently - like the ones of the EU AI Act that have now been published. They cannot be hacked together anymore - they need proper guidance.

It is easy to think of these rules as negative, restrictive “paperwork from Brussels” that would limit growth and innovation. But that is not true: think of these rules as guardrails. They will ensure that we can safely and confidently take on the risks that AI entails, without having to worry about them too much. We can focus on many more interesting, productivity-enhancing questions - and let the guardrails guard us.

SIDENOTE: Hopsworks is a European-bred platform for building and operating machine learning systems, with tailored support for high-risk ML applications and the most flexible and strongest enterprise-grade security model (dynamic role-based access control) of any AI platform available on-premises today.

ML Systems Provide the Guardrails for AI

At Hopsworks, we have been building Machine Learning Systems for a long time, with love from Sweden. We like to emphasize that these systems are not just experiments or coding playgrounds - they are actual production systems that transform data sources into AI/ML-powered products and services. We have written about this extensively on our blog in ‘MLOps to ML Systems’, and we summarize it in a comprehensive diagram:

Figure 2: Overview of a Machine Learning (ML) System

When you understand and buy into the mental map for building these types of ML and AI systems, you will quickly see that it largely coincides with your map for complying with the EU AI Act, even for some of the higher-risk applications. Deploying your Machine Learning and Artificial Intelligence applications on an ML platform immediately satisfies most if not all of the requirements that the EU Act suggests and imposes, and does away with many of the concerns.

Hopsworks: the European ML Platform for ML Systems

Let’s just explain some of the details here. In the EU AI Act, there are some very specific requirements outlined for the High Risk AI systems. Let’s go through them line by line, paragraph by paragraph, and just evaluate how an ML Platform like Hopsworks helps you comply with the Act. Note that we are referring to specific chapters and paragraphs that you can find in The Act document.

Title III, Chapter 2, Article 9: Risk Management (p.46): 

“A risk management system shall be established, implemented, documented and maintained in relation to high risk AI systems” 

The simple fact of running your AI as a Machine Learning System on Hopsworks will help you better manage the risks and govern your data processes, and do so continuously, as a system. Your AI and ML projects are no longer run on rogue servers somewhere - they instead use a centralized platform that you can operate and manage with much greater confidence. It allows for centralized control and understanding of risks that are currently decentralized and unknown.

Title III, Chapter 2, Article 10: Data Governance (p.48)

“Training, validation and testing data sets shall be subject to appropriate data governance and management practices.”

In any AI and ML project, there is a lot of data floating around. Very often this data is important, privacy-sensitive, and therefore confidential, and it cannot be accessed without proper controls. A centralized Machine Learning System based on Hopsworks allows you to implement standard governance practices on all your training, validation and testing data sets: Hopsworks always implements dataset version control, access control, meta-tagging, and lineage management, so that you know what happens to the datasets at all times and can comply with The Act’s requirements.
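To make the governance practices the Act asks for more concrete - versioning, access control, tagging, and lineage - here is a minimal, hypothetical sketch of the bookkeeping an ML platform automates for you. The class and method names are illustrative only; this is not the Hopsworks API.

```python
from dataclasses import dataclass


@dataclass
class DatasetVersion:
    name: str
    version: int
    tags: dict      # e.g. {"pii": "true", "purpose": "training"}
    parents: list   # lineage: names of upstream datasets


class DatasetRegistry:
    """Minimal registry: every write creates a new immutable version,
    reads are access-controlled, and lineage is traceable for audits."""

    def __init__(self):
        self._versions = {}  # name -> list of DatasetVersion
        self._acl = {}       # name -> set of users allowed to read

    def register(self, name, tags=None, parents=None, allowed=()):
        versions = self._versions.setdefault(name, [])
        v = DatasetVersion(name, len(versions) + 1, tags or {}, list(parents or []))
        versions.append(v)
        self._acl.setdefault(name, set()).update(allowed)
        return v

    def latest(self, name, user):
        # Access control: only explicitly allowed users may read.
        if user not in self._acl.get(name, set()):
            raise PermissionError(f"{user} may not read {name}")
        return self._versions[name][-1]

    def lineage(self, name):
        """Walk upstream parents so an auditor can trace data provenance."""
        seen, stack = [], list(self._versions[name][-1].parents)
        while stack:
            parent = stack.pop()
            seen.append(parent)
            stack.extend(self._versions[parent][-1].parents)
        return seen
```

For example, registering a raw dataset and then a training set derived from it lets `lineage("training")` answer the auditor’s question “where did this training data come from?” without relying on tribal knowledge.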

Title III, Chapter 2, Article 11-12: Documentation, Record-keeping and Reporting (p.49)

“The technical documentation of a high risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up-to-date.”

The same Machine Learning System on Hopsworks that enables your data governance will also allow you to document how the AI-based business process works, and report on it whenever needed. Hopsworks comes with all the required logs and record keeping enabled by default, so that you don’t have to count on the goodwill of a machine learning engineer to facilitate this. Sure - there will always be rogue initiatives, but at least you can count on the more high-profile systems to provide you with everything needed to comply with The Act.
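What “record keeping enabled by default” means in practice can be sketched as an append-only decision log. The example below is a generic, hypothetical illustration of the pattern (not how Hopsworks implements its logging): each entry records model, version, inputs and output, and is hash-chained to the previous entry so that tampering is detectable during an audit.

```python
import hashlib
import json
import time


class AuditLog:
    """Append-only log of model decisions. Each entry embeds a hash of the
    previous entry, so altering any past record breaks the chain."""

    def __init__(self):
        self.entries = []

    def record(self, model, model_version, inputs, output):
        prev = self.entries[-1]["hash"] if self.entries else ""
        body = {
            "ts": time.time(),
            "model": model,
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
            "prev": prev,
        }
        body["hash"] = hashlib.sha256(
            (prev + json.dumps(body, sort_keys=True, default=str)).encode()
        ).hexdigest()
        self.entries.append(body)
        return body["hash"]

    def verify(self):
        """Recompute the hash chain; returns True only if no entry changed."""
        prev = ""
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                (prev + json.dumps(body, sort_keys=True, default=str)).encode()
            ).hexdigest()
            if e["hash"] != expected or e["prev"] != prev:
                return False
            prev = e["hash"]
        return True
```

If an engineer (or an attacker) silently edits a logged decision after the fact, `verify()` fails - which is exactly the kind of integrity guarantee an assessor of a high-risk system will want to see.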

Title III, Chapter 2, Article 13-14: Transparency, Provisioning of Information to Users and Human Oversight (p.50-51)

“High risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use.”

When your Machine Learning Systems are based on Hopsworks, you will not only have a lot of technical tools and APIs that make it easier to understand what these systems are doing, how they work, and how they are governed - you will also get a great, easy-to-use user interface. Using this interface, it becomes much easier for all stakeholders to understand the details of a project, start to finish.
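The human-oversight requirement in Article 14 often takes a simple, well-known shape in practice: the system acts autonomously only on confident predictions and escalates everything else to a person. The thresholds and names below are illustrative assumptions, not a Hopsworks feature; the sketch just shows the routing pattern.

```python
class OversightGate:
    """Human-in-the-loop gate: scores outside the confident bands are
    queued for a human reviewer instead of being decided automatically."""

    def __init__(self, approve_above=0.9, reject_below=0.1):
        self.approve_above = approve_above
        self.reject_below = reject_below
        self.review_queue = []  # cases awaiting a human decision

    def decide(self, case_id, score):
        if score >= self.approve_above:
            return "approve"            # confident: automate
        if score <= self.reject_below:
            return "reject"             # confident: automate
        # Ambiguous: a natural person must make the final call.
        self.review_queue.append((case_id, score))
        return "pending_human_review"
```

The width of the “human band” is a policy decision: widening it trades automation for oversight, and for a high-risk system that trade-off should be documented and reviewable, not buried in code.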

Title III, Chapter 2, Article 15: Accuracy, Robustness, Cybersecurity (p.51-52)

“High risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose, an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle”

Hopsworks aims to provide users of Machine Learning Systems with best-of-breed infrastructure for their AI and ML projects, on the basis of a set of open, pre-configured and managed components that our users can rely upon. Hopsworks takes responsibility for the entire system and supports it as a commercial product that can comply with the highest demands for accuracy, robustness and cybersecurity. Our platform can run either in the cloud or on your own infrastructure, according to your own and the industry’s best practices.

Conclusion: the AI Act is an Opportunity for Us All!

We hope to have given you a very specific understanding of how a Machine Learning System built on a Machine Learning Platform like Hopsworks, centered around its market-leading feature store, makes compliance with the AI Act an immediate and real possibility that governments and companies alike should look forward to.

As such, we at Hopsworks think the EU AI Act is a true opportunity, and are massively looking forward to helping many clients seize the possibilities that the technology offers, in a way that meets the EU’s requirements, both in spirit and in practice. The sky's the limit!


© Hopsworks 2024. All rights reserved. Various trademarks held by their respective owners.
