Civil society calls on the EU to draw limits on surveillance technology in the Artificial Intelligence Act.
As the European Union inches closer to finalising the Artificial Intelligence (AI) Act, civil society organisations are concerned about the unchecked, discriminatory and mass surveillance resulting from the use of AI in policing and migration control. The AI Act is an opportunity to prevent such harms by drawing limits on harmful uses of AI technology.
EAPN Europe and 115 civil society organisations are asking EU policymakers to regulate the use of AI systems by law enforcement, migration control and national security authorities throughout Europe. The use of AI by these authorities endangers people's fundamental rights, including the rights to freedom of assembly, liberty, asylum, privacy and data protection, social protection, and non-discrimination.
These AI systems reinforce the over-policing, disproportionate surveillance, detention and imprisonment of groups that already face structural discrimination.
In order to prevent harm from the use of AI in policing, migration control and national security, the EU AI Act must:
- Include prohibitions on uses of AI that pose an unacceptable risk to fundamental rights. This includes a ban on different forms of biometric surveillance, predictive policing, and harmful uses of AI in the migration context.
- Provide public transparency and oversight when police, migration and national security agencies use ‘high-risk’ AI. These authorities must register high-risk uses in the EU-wide AI database.
- Ensure that the AI Act properly regulates uses of AI in policing, migration and national security that pose a risk to human rights, covering the full range of AI uses in migration control and allowing no exemption for national security.
Photo by EDRi.