
What countermeasures are governments and corporations expected to take against the risks of high-risk AI?

Governments and corporations are expected to take several countermeasures against the risks posed by high-risk AI systems, many of them codified in the EU Artificial Intelligence Act (AIA). These countermeasures include:

  • Data protection and transparency: High-risk AI systems must comply with data protection law and be transparent about how they operate. Developers must ensure that the data used to train and operate these systems is adequately protected, and users must be able to understand how the systems work and what data they collect.
  • Human oversight: High-risk AI systems must be designed to allow human oversight and intervention. This means there must be mechanisms that let humans override an AI system's decisions, monitor its operation, and intervene when necessary (a minimal override sketch follows this list).
  • Prior conformity assessments: Developers of high-risk AI systems must have their systems assessed by an independent conformity assessment body before they can be marketed or put into use. This assessment is designed to ensure that the systems meet the requirements of the AIA and are unlikely to pose a risk to safety or fundamental rights.
  • Post-market monitoring: Developers of high-risk AI systems must monitor the performance of their systems in production and take corrective action where necessary. This means collecting data on how the systems behave after deployment and using that data to identify and address issues (see the drift-monitoring sketch below).
  • Research and development: Governments and corporations should invest in research and development to improve the safety and reliability of AI systems, focusing on new methods for detecting and mitigating bias and for making AI systems accountable and transparent.
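
The human-oversight requirement is ultimately an architectural one: the system must expose a point where a person can inspect and overrule automated decisions. Below is a minimal Python sketch of one common pattern, confidence-based routing to a human review queue. All names here (Decision, HumanReviewQueue, the 0.9 threshold) are illustrative assumptions, not part of the AIA or of any particular library.

```python
# Sketch: predictions whose confidence falls below a threshold are routed
# to a human reviewer instead of being applied automatically. All names and
# the threshold are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Decision:
    input_id: str
    label: str
    confidence: float
    needs_human_review: bool = False

@dataclass
class HumanReviewQueue:
    """Holds decisions a human must confirm or override."""
    pending: list = field(default_factory=list)

    def submit(self, decision: Decision) -> None:
        decision.needs_human_review = True
        self.pending.append(decision)

    def override(self, input_id: str, new_label: str) -> None:
        # A human reviewer replaces the model's decision.
        for d in self.pending:
            if d.input_id == input_id:
                d.label = new_label
                d.needs_human_review = False

def route_decision(decision: Decision, queue: HumanReviewQueue,
                   threshold: float = 0.9) -> Decision:
    """Apply the decision automatically only if the model is confident enough."""
    if decision.confidence < threshold:
        queue.submit(decision)
    return decision

queue = HumanReviewQueue()
d = route_decision(Decision("loan-42", "reject", 0.61), queue)
print(d.needs_human_review)   # True: a human must confirm or override this one
queue.override("loan-42", "approve")
```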

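Post-market monitoring similarly lends itself to a concrete check: compare the distribution of a production feature against the distribution seen at training time and raise an alert when they diverge. The sketch below uses the Population Stability Index (PSI), one common drift statistic; the synthetic data and the 0.2 alert threshold are illustrative assumptions, not requirements of the AIA.

```python
# Sketch: flag distribution drift between training and production data
# with the Population Stability Index (PSI).
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference sample and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    # Live values outside the reference range are ignored by this binning.
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid log(0) and division by zero in empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
training = rng.normal(0.0, 1.0, 10_000)    # distribution at training time
production = rng.normal(0.5, 1.2, 10_000)  # live traffic has shifted

score = psi(training, production)
if score > 0.2:  # a common rule of thumb for a significant shift
    print(f"PSI={score:.3f}: drift detected, trigger corrective action")
```
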
In addition to these specific countermeasures, governments and corporations should take a number of general steps to promote responsible AI development and deployment. These steps include:

  • Developing clear ethical guidelines for AI development and deployment.
  • Supporting the development of a skilled workforce in AI.
  • Creating a culture of accountability and transparency in AI development.

By taking these steps, governments and corporations can help to ensure that AI is developed and deployed in a way that is safe, reliable, and beneficial to society.
