The European Union is considering banning the use of AI for a number of purposes, including mass surveillance and social credit scores.
That's according to a leaked draft proposal circulating online, first reported by Politico, ahead of an expected official announcement next week.
If the draft proposal were adopted, the European Union would be taking a strong stance on certain applications of AI, setting it apart from the United States and China.
Some use cases would be regulated in a manner similar to the EU's oversight of digital privacy under the GDPR.
Member states would be required to set up assessment boards to test and validate high-risk AI systems.
Companies that develop or sell AI software banned in the European Union, including those based elsewhere in the world, could be fined up to 4 percent of their global revenue.
The draft regulations include:
- A ban on AI systems used for indiscriminate surveillance, including systems that directly track individuals in physical environments or aggregate data from other sources.
- A ban on AI systems that create social credit scores, meaning systems that judge someone's trustworthiness based on social behavior or predicted personality traits.
- Special authorization required to use remote biometric identification systems, such as facial recognition, in public spaces.
- Notifications required when people are interacting with an AI system, unless this is clear from the circumstances and context of use.
- New oversight of high-risk AI systems, including those that pose a direct threat to safety, such as self-driving cars, and those that have a high chance of affecting someone's livelihood, such as systems used in hiring, judicial decisions, and credit scoring.
- Assessment of high-risk AI systems before they are put into service, including ensuring that these systems are interpretable to human overseers and that they are trained on high-quality datasets tested for bias.
- The creation of a European Artificial Intelligence Board, made up of representatives from each member state, to help the Commission decide which AI systems count as high-risk and to recommend changes to the prohibitions.