
Let’s walk through some of the ways you can implement the requirements posed by the EU AI Act.

Implementing risk management systems as required by the EU AI Act

In “Title III, Chapter 2, Article 9: Risk management (p.46)” of the EU AI Act, the implementation of a risk management system is specifically required for high-risk AI systems: “A risk management system shall be established, implemented, documented and maintained in relation to high risk AI systems.”

Simply running your AI as a Machine Learning System helps you manage risks and govern your data processes continuously, as a system. Your AI and ML projects no longer run on rogue servers somewhere - instead, they run on a centralized platform that you can operate and manage with much greater confidence. This allows for centralized control over, and understanding of, risks that are currently decentralized and unknown.
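To make the idea of an established, documented and maintained risk management system concrete, here is a minimal sketch of a risk register in Python. All names and fields here are illustrative assumptions, not taken from the Act or from any particular product; a real system would persist this data and tie it into review workflows.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a minimal risk register covering the Article 9
# cycle of identifying, mitigating, and reviewing risks for an AI system.

@dataclass
class Risk:
    description: str
    severity: str                              # e.g. "low", "medium", "high"
    mitigations: list = field(default_factory=list)
    status: str = "open"

class RiskRegister:
    def __init__(self, system_name):
        self.system_name = system_name
        self.risks = []

    def identify(self, description, severity):
        """Record a newly identified risk; it starts out open."""
        risk = Risk(description, severity)
        self.risks.append(risk)
        return risk

    def mitigate(self, risk, measure):
        """Attach a mitigation measure and mark the risk as handled."""
        risk.mitigations.append(measure)
        risk.status = "mitigated"

    def open_risks(self):
        """Risks that still need attention - the list a reviewer would audit."""
        return [r for r in self.risks if r.status == "open"]
```

Because the register is an explicit data structure rather than tribal knowledge, it can be documented, versioned and reported on, which is the point of running risk management as a system.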

Implementing data governance as required by the EU AI Act

In “Title III, Chapter 2, Article 10: Data governance (p.48)” of the EU AI Act, the implementation of a data governance system is specifically required for high-risk AI systems:

“Training, validation and testing data sets shall be subject to appropriate data governance and management practices.”

In any AI and ML project, a lot of data is floating around. Very often, this data is important, privacy-sensitive, and therefore confidential; it cannot be accessed without proper controls. A centralized Machine Learning System lets you apply standard governance practices to all your training, validation and testing data sets: the system implements dataset version control, access control, meta-tagging, and lineage management, so that you know what happens to each dataset at all times and can comply with The Act’s requirements.
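The four governance practices named above - version control, access control, meta-tagging and lineage - can be sketched as a small in-memory dataset registry. This is a hypothetical illustration, not the API of any real platform; in production these responsibilities would be backed by durable storage and an identity provider.

```python
import hashlib
from dataclasses import dataclass

# Hypothetical sketch: a minimal dataset registry with version control,
# access control, meta-tagging, and lineage tracking.

@dataclass
class DatasetVersion:
    name: str
    version: int
    content_hash: str          # fingerprint of the data, for reproducibility
    tags: dict                 # meta-tags, e.g. {"pii": True, "split": "training"}
    parents: list              # lineage: (name, version) pairs this was derived from

class DatasetRegistry:
    def __init__(self):
        self._versions = {}    # (name, version) -> DatasetVersion
        self._acl = {}         # name -> set of users allowed to read

    def register(self, name, data: bytes, tags=None, parents=None):
        """Store a new immutable version of a dataset."""
        version = 1 + max((v for (n, v) in self._versions if n == name), default=0)
        dv = DatasetVersion(name, version, hashlib.sha256(data).hexdigest(),
                            tags or {}, parents or [])
        self._versions[(name, version)] = dv
        return dv

    def grant(self, name, user):
        self._acl.setdefault(name, set()).add(user)

    def read(self, name, version, user):
        """Access control: only granted users can read a dataset."""
        if user not in self._acl.get(name, set()):
            raise PermissionError(f"{user} may not access {name}")
        return self._versions[(name, version)]

    def lineage(self, name, version):
        """Walk parent links to recover the full provenance of a version."""
        dv = self._versions[(name, version)]
        chain = [(dv.name, dv.version)]
        for pname, pversion in dv.parents:
            chain += self.lineage(pname, pversion)
        return chain
```

The key design choice is that versions are immutable and carry their provenance with them, so an auditor can always answer “where did this training set come from, and who could touch it?”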

Implementing documentation, record-keeping and reporting as required by the EU AI Act

In “Title III, Chapter 2, Articles 11-12: Documentation, Record-keeping and Reporting (p.49)” of the EU AI Act, the implementation of documentation, record-keeping and reporting is specifically required for high-risk AI systems: “The technical documentation of a high risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up-to-date.”

The same Machine Learning System that enables your data governance will also let you document how the AI-based business process works and report on it whenever needed. The system comes with all the required logs and record-keeping enabled by default, so you don’t have to count on the goodwill of a machine learning engineer to facilitate this. Sure, there will always be rogue initiatives, but at least you can count on the more high-profile systems to provide everything needed to comply with The Act.
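“Record-keeping enabled by default” can be as simple as wrapping every prediction function so that each call is logged automatically, with no action required from the engineer. The sketch below is a hypothetical illustration; the model name and storage are assumptions, and a real system would write to durable, tamper-evident storage rather than a list.

```python
import functools
import json
import time

# Hypothetical sketch: record-keeping by default via a decorator that
# appends every prediction request and response to an audit log.

AUDIT_LOG = []  # stand-in for durable, tamper-evident storage

def audited(model_name):
    """Wrap a predict function so every call is logged automatically."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            AUDIT_LOG.append(json.dumps({
                "model": model_name,
                "timestamp": time.time(),
                "inputs": repr(args),
                "output": repr(result),
            }))
            return result
        return wrapper
    return decorator

@audited("credit-scoring-v2")       # hypothetical model name
def predict(features):
    return sum(features) > 1.0      # stand-in for a real model
```

Because logging lives in the platform layer (the decorator) rather than in each model, record-keeping cannot be silently skipped - which is exactly the property the Act’s record-keeping articles call for.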

Implementing transparency, provisioning of information to users and human oversight as required by the EU AI Act

In “Title III, Chapter 2, Articles 13-14: Transparency, provisioning of information to users and human oversight (p.50-51)” of the EU AI Act, the implementation of transparency, provisioning of information to users and human oversight is specifically required for high-risk AI systems: “High risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use.”

When your Machine Learning Systems are based on the right technology stack, you not only get technical tools and APIs that make it easier to understand what these systems are doing, how they work, and how they are governed - you also get an easy-to-use user interface. Using this interface, it becomes much easier for all stakeholders to understand the details of a project from start to finish.

Implementing accuracy, robustness and cybersecurity best practices as required by the EU AI Act

In “Title III, Chapter 2, Article 15: Accuracy, robustness, cybersecurity (p.51-52)” of the EU AI Act, the implementation of accuracy, robustness and cybersecurity best practices is specifically required for high-risk AI systems: “High risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose, an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle.”

Machine Learning Systems built on best-of-breed infrastructure give users a set of open, pre-configured and managed components they can rely on for their AI and ML projects. These systems take responsibility for the entire process and support it as a product that can comply with the highest demands for accuracy, robustness and cybersecurity. These platforms can run either in the cloud or on the user’s own infrastructure, according to their own and the industry’s best practices.
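“Perform consistently throughout their lifecycle” implies monitoring, not just a one-off evaluation. As a hypothetical sketch, a sliding window of labeled outcomes can be checked continuously against a minimum accuracy, raising an alert for retraining or rollback when the model drifts below it. The window size and threshold here are illustrative assumptions.

```python
from collections import deque

# Hypothetical sketch: continuous accuracy monitoring over the lifecycle.
# A sliding window of labeled outcomes is checked against a floor.

class AccuracyMonitor:
    def __init__(self, window=100, min_accuracy=0.9):
        self.outcomes = deque(maxlen=window)   # True if prediction was correct
        self.min_accuracy = min_accuracy

    def record(self, prediction, ground_truth):
        """Log whether a served prediction matched the eventual label."""
        self.outcomes.append(prediction == ground_truth)

    def healthy(self):
        """False once windowed accuracy falls below the configured floor."""
        if not self.outcomes:
            return True  # nothing observed yet
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy >= self.min_accuracy
```

A platform running this kind of check as a managed component, rather than leaving it to each project, is what lets accuracy and robustness be supported “as a product” across the whole lifecycle.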

© Hopsworks 2024. All rights reserved. Various trademarks held by their respective owners.
