Article updated on September 4, 2024

What actions are governments and corporations expected to take to address the risks of high-risk AI?

Governments and corporations are expected to take several actions to address the risks posed by high-risk AI. These include:

  • Data protection and transparency: High-risk AI systems must comply with data protection law and be transparent about how they operate. Developers must ensure that the data used to train and operate these systems is adequately protected, and that users can understand how the systems work and what data they collect.
  • Human oversight: High-risk AI systems must be designed to allow human oversight and intervention. There must be mechanisms that let humans monitor how a system operates, intervene when necessary, and override its decisions (see the first sketch after this list).
  • Prior conformity assessments: High-risk AI systems must undergo a conformity assessment before they can be placed on the market or put into use. Depending on the type of system, this assessment is carried out by the provider itself (internal control) or by an independent notified body, and it is designed to ensure that the system meets the requirements of the AIA and is not likely to pose a risk to safety or fundamental rights. Separately, the EU's AI Office will monitor, supervise, and enforce the AI Act requirements on general purpose AI (GPAI) models and systems across the 27 EU Member States. This includes analyzing emerging systemic risks stemming from GPAI development and deployment, developing evaluation capabilities, conducting model evaluations, and investigating incidents of potential infringement and non-compliance. To facilitate compliance by GPAI model providers and take their perspectives into account, the AI Office will produce voluntary codes of practice, adherence to which would create a presumption of conformity.
  • Post-market monitoring: Developers of high-risk AI systems must monitor the performance of their systems in use and take corrective action if necessary. This means collecting data on how the systems operate in practice and using that data to identify and address issues (see the monitoring sketch after this list).
  • Research and development: Governments and corporations should invest in research and development to improve the safety and reliability of AI systems, with a focus on new methods for detecting and mitigating bias and for making AI systems accountable and transparent (a simple bias metric is sketched after this list).
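To make the human-oversight requirement more concrete, here is a minimal Python sketch of one common pattern: low-confidence decisions are routed to a human reviewer, and a reviewer can always override the AI's outcome. The names (Decision, decide, human_override) and the confidence threshold are illustrative assumptions, not terms or values taken from the AIA.

```python
from dataclasses import dataclass

# Illustrative human-in-the-loop sketch; names and threshold are assumptions,
# not requirements taken from the AI Act.
CONFIDENCE_THRESHOLD = 0.85  # decisions below this are deferred to a human

@dataclass
class Decision:
    subject_id: str
    ai_outcome: str
    confidence: float
    final_outcome: str | None = None
    reviewed_by_human: bool = False

def decide(subject_id: str, ai_outcome: str, confidence: float,
           review_queue: list[Decision]) -> Decision:
    """Apply the AI outcome automatically only when confidence is high."""
    decision = Decision(subject_id, ai_outcome, confidence)
    if confidence < CONFIDENCE_THRESHOLD:
        review_queue.append(decision)  # defer: a human decides later
    else:
        decision.final_outcome = ai_outcome
    return decision

def human_override(decision: Decision, human_outcome: str) -> Decision:
    """A human reviewer can replace the AI's outcome at any point."""
    decision.final_outcome = human_outcome
    decision.reviewed_by_human = True
    return decision
```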
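The post-market monitoring obligation can likewise be sketched in a few lines: predictions and later ground-truth feedback are logged, and a rolling error rate is compared against an alert threshold that would trigger a corrective-action review. The window size and threshold are illustrative assumptions, not figures from the AIA.

```python
import logging
from collections import deque

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("post_market_monitor")

WINDOW = 500              # number of recent outcomes to keep (assumed)
ALERT_ERROR_RATE = 0.10   # review triggered above a 10% error rate (assumed)

recent_outcomes: deque[bool] = deque(maxlen=WINDOW)  # True = correct prediction

def record_outcome(prediction, ground_truth) -> None:
    """Log an operational outcome and flag the system if errors accumulate."""
    recent_outcomes.append(prediction == ground_truth)
    error_rate = recent_outcomes.count(False) / len(recent_outcomes)
    log.info("prediction=%s truth=%s rolling_error_rate=%.3f",
             prediction, ground_truth, error_rate)
    if len(recent_outcomes) == WINDOW and error_rate > ALERT_ERROR_RATE:
        # In practice this would open an incident and start corrective action.
        log.warning("Rolling error rate %.1f%% exceeds threshold; review required.",
                    error_rate * 100)
```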
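Finally, one simple example of the kind of bias-detection method the research bullet refers to is the demographic parity difference: the gap in positive-outcome rates between groups. This is just one common fairness metric, chosen here for illustration; the AIA does not prescribe a specific metric.

```python
def demographic_parity_difference(outcomes: list[int], groups: list[str]) -> float:
    """Largest gap in positive-outcome rate across groups.

    outcomes: 1 for a positive decision, 0 otherwise; groups: a group label
    per decision subject. A larger value suggests more disparate treatment.
    """
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(group_outcomes) / len(group_outcomes)
    return max(rates.values()) - min(rates.values())

# Example: an 80% positive rate for group "A" vs. 40% for group "B"
# yields a parity difference of 0.40.
print(demographic_parity_difference(
    [1, 1, 1, 1, 0, 1, 0, 1, 0, 0],
    ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]))
```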

In addition to these specific countermeasures, governments and corporations should also take a number of general steps to promote responsible AI development and deployment. 

These steps include:

  • Developing clear ethical guidelines for AI development and deployment.
  • Supporting the development of a skilled workforce in AI.
  • Creating a culture of accountability and transparency in AI development.

By taking these steps, governments and corporations can help to ensure that AI is developed and deployed in a way that is safe, reliable, and beneficial to society.