What are examples of specific AI applications that would be considered high-risk under the AIA (the EU Artificial Intelligence Act)?
Remote biometric identification systems: These systems use AI to identify individuals at a distance based on biometric data such as facial features or iris scans. They are listed as high-risk because of the potential for misuse, such as mass surveillance or identity theft; note that "real-time" remote biometric identification in publicly accessible spaces for law enforcement goes further and is prohibited under the AIA, subject to narrow exceptions;
AI-powered social scoring systems: These systems use AI to rank or evaluate individuals based on their social behaviour or personal characteristics. Because of the potential for discrimination and social exclusion, the AIA in fact goes beyond the high-risk tier here: social scoring that leads to detrimental or unjustified treatment is a prohibited practice under the Act;
AI-powered autonomous weapons: These systems use AI to select and engage targets without human intervention. Although often cited as the paradigm case of unacceptable AI because of the potential for indiscriminate killing, AI systems developed or used exclusively for military purposes fall outside the AIA's scope altogether, so the Act does not regulate them directly;
AI-powered medical devices: These devices use AI to diagnose, treat, or monitor patients. They are considered high-risk because diagnostic or treatment errors can directly harm patients, and because medical devices are already subject to third-party conformity assessment under EU product-safety legislation;
AI-powered online advertising systems: These systems use AI to target advertising to individuals based on their online behaviour. Ad targeting as such is not listed as high-risk in the AIA, but the concerns it raises, manipulation and disinformation, are addressed elsewhere in the Act: AI that deploys manipulative or deceptive techniques causing significant harm is a prohibited practice.