The European Artificial Intelligence (AI) software market is expected to experience 1270% revenue growth in the coming years, with funding increasing from around 2.09 billion U.S. dollars in 2018 to an expected 26.52 billion by 2025. AI has also driven the development of Facial Recognition Technologies (FRTs). In some Member States, law enforcement authorities use FRTs to ensure public safety by collecting unique biometric data derived from individual facial features. For example, the Czech Republic is expanding the use of FRTs at Prague's international airport, and Hungary plans to set up 35 000 cameras across the country to maintain public order.
Mass surveillance is an increasing but still largely invisible aspect of day-to-day life. While people go to work, attend private and public events or search for something on the web, their data is unknowingly being transferred to governmental institutions.
This issue sparks a debate regarding potential violations of the human right to privacy, protected by the Charter of Fundamental Rights of the EU. Basic freedoms may be violated if the data needed for national security purposes is not stored in accordance with EU law. Recent examples of such violations occurred in Belgium, France and the UK, where national laws allowed agencies to acquire the personal data of internet and phone users.
Taking all of this into account, one question remains: how can a balance between technologies and the policies safeguarding the public be achieved in the near future?
Take your first steps into understanding the topic by following these links:
The biggest challenge of the topic is finding a balance between privacy and safety, as well as ensuring the ethical use of data. Mass surveillance is commonly seen as an invasion of privacy and as a tool for governments to control their citizens. These technologies are viewed as a threat to freedom and safety, while their role as a measure to strengthen national security is often forgotten. Moreover, about half of Europeans are not aware that governments collect data for security reasons.
Another issue to consider is the possibility of data leaks. FRTs open up security possibilities in biometric authentication, surveillance systems, border management and public administration. With that come questions of whether this information is stored properly and who can access it. A well-known data leak occurred in 2017, when the personal information of people working undercover for the Swedish police and the Swedish security service was acquired by International Business Machines Corporation (IBM) branches in Eastern Europe.
Moreover, a common type of mass surveillance is performed by private sector companies. Through loyalty programmes, people are encouraged to exchange their private information for discounts on clothes, flight tickets, electronics and even groceries. This data is valuable to governments, and there have been accusations of cooperation between businesses and national security agencies. Although the EU is taking steps towards a comprehensive framework on data sharing between private companies and governments, concrete measures are needed to prevent data breaches.
Facial recognition raises concerns in law enforcement, since misidentification can create serious safety issues. The main concern is discrimination caused by errors and biases within FRTs. Specifically, FRTs misidentify women, LGBTQIA+ people and people of colour at higher rates. The less accurate an FRT is, the more biased results it produces, especially when there is no human review of machine-generated results.
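The effect of unequal error rates can be made concrete with a small sketch. The following Python snippet is a purely hypothetical illustration (the group names and all numbers are invented for the example, not real FRT data) of how an audit might compare false match rates across demographic groups:

```python
# Hypothetical audit sketch: compare how often an FRT wrongly reports a match
# for people who are in fact different, broken down by demographic group.
# All figures below are synthetic, chosen only to illustrate the idea.

def false_match_rate(results):
    """Share of genuine non-matches that the system wrongly flagged as matches."""
    non_matches = [r for r in results if not r["same_person"]]
    false_matches = [r for r in non_matches if r["predicted_match"]]
    return len(false_matches) / len(non_matches)

# Synthetic verification attempts: 100 non-match comparisons per group.
attempts = {
    "group_a": [{"same_person": False, "predicted_match": False}] * 98
               + [{"same_person": False, "predicted_match": True}] * 2,
    "group_b": [{"same_person": False, "predicted_match": False}] * 90
               + [{"same_person": False, "predicted_match": True}] * 10,
}

for group, results in attempts.items():
    # A markedly higher rate for one group signals biased results that would
    # warrant human review before any law-enforcement action.
    print(group, false_match_rate(results))
```

In this invented scenario the system falsely matches group_b five times as often as group_a; it is exactly this kind of disparity that makes human review of machine-generated results essential.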
For more information look here:
One of the focuses of the European Union Agency for Fundamental Rights (FRA) is the use of “live facial recognition technology” (LFRT) and mass surveillance. At the end of 2019, FRA published a report analysing current legislation and policy on the fundamental rights challenges raised by facial recognition technology. According to a large survey conducted by FRA in the United Kingdom, only 9% of people feel completely uncomfortable when facial recognition is used for policing purposes, whereas 24% do not feel comfortable with its use on public transport and 37% at their workplace. In short, while people generally feel more comfortable with facial recognition technologies being used for policing, many are unhappy with their use in everyday life. Since then, FRA's key findings have been taken into account in several national and European policy initiatives, such as the Czech Republic's National Artificial Intelligence Strategy (2019).
The European Commission has recognised the rising challenges of AI. By releasing several initiatives at the start of 2020, such as the White Paper on Artificial Intelligence and “A Strategy for Europe – Fit for the Digital Age”, the European Commission has set clear goals to:
place Europe ahead of technological developments and encourage the uptake of AI by the public and private sectors;
prepare for socio-economic changes brought about by AI;
ensure an appropriate ethical and legal framework.
The European Digital Rights (EDRi) network consists of 44 Non-Governmental Organisations (NGOs) from all over the EU; its main mission is to raise awareness of the importance of European policy-making in the digital environment.
For more information about institutions relating to the topic look into these links:
The European Commission's Strategy on Artificial Intelligence urges Member States to build their national AI plans and aims to ensure appropriate ethical and legal frameworks for AI advancement. AI Watch is a project initiated by the European Commission that, notably in collaboration with OECD.AI, monitors and analyses the national AI strategies of EU Member States.
On 17 February 2021, EDRi, with a coalition of 44 human rights and social justice groups, launched a unique, officially recognised EU petition called Reclaim Your Face. Its aim is to demand a ban on the use of harmful AI, such as biometric mass surveillance, through a European Citizens’ Initiative (ECI) intending to collect at least 1 million signatures in no fewer than 7 EU countries within the following year. Moreover, the Committee of Ministers of the Council of Europe adopted several recommendations by EDRi members, such as Access Now and the Wikimedia Foundation, to consider the human rights impacts of algorithmic systems.
The European AI Fund is a philanthropic project aimed at shaping the direction of AI in Europe. Several well-known foundations, such as the Bosch Foundation, Luminate, the Mozilla Foundation and the Oak Foundation, provide financial assistance to foster a political and technical network of European public interest groups and civil society organisations, supporting a diversity of actors and priorities that serve society as a whole.
In May 2019, the Czech Republic released its National Strategy for Artificial Intelligence to support the development of ethical and secure AI. The Czech Government has also taken financial measures, allocating a total of CZK 9.5 million to AI research teams in the coming years. However, the Czech Republic is one of 14 countries that urged the Commission in October 2020 to push for as little regulation as possible in the AI field.
On 19 May 2020, the German Constitutional Court recognised and restricted the mass surveillance of foreign citizens by the Federal Intelligence Service (Bundesnachrichtendienst - BND), after the EDRi member Society for Civil Rights (Gesellschaft für Freiheitsrechte - GFF) filed a constitutional complaint against the BND law. The law had authorised the BND to wiretap internet traffic worldwide and to track certain people or organisations without cause, ignoring the freedom of open and private communication.
It is in the interest of the European Union and its Member States to create safe and efficient provisions for the rapid development of AI. The wellbeing of citizens should always be taken into account as mass surveillance and facial recognition technology expand. Bearing in mind the privacy of citizens, the question remains: "How can AI be used for safety and security reasons?"
Links for further research:
After reading the Topic Overview, we hope you understand that this subject is far from exhausted. On the one hand, further research is required to understand the technology's practical applications and future growth. On the other hand, serious legal problems such as mass surveillance and facial recognition technologies must be discussed. Beyond the debate, there is also the question of when and how these breakthroughs can be conveyed to the general public.