Chairperson: Miguel Bustorff (BE/PT)

Special Committee on Artificial Intelligence in a Digital Age (AIDA)

Responsible Intelligence: Technologies using artificial intelligence (AI) are playing a key role in the digital and green transformation by fostering innovation, competitiveness, and sustainable economic growth. Considering recent ethical concerns about the usage of data in AI, leading to bias and compromising citizens' fundamental rights, how can the EU continue to develop its AI ecosystem while ensuring the technologies remain safe, transparent, and non-discriminatory?

Topic at a glance
Artificial Intelligence is one of the fastest-growing technologies globally. The forecasted AI annual growth rate between 2020 and 2027 is 33.2%. AI can bring huge benefits to society, such as fighting climate change, boosting the green transition, and fostering efficiency, innovation, and sustainable economic growth.

However, many companies are still reluctant to implement AI systems due to a lack of investment, a lack of skills, and fear of the unknown. Meanwhile, ethical concerns have been on the rise, with the protection of data and fundamental rights being the main issues when assessing AI systems and their impact.

The European Union is one of the first global actors to conceive and plan a major AI strategy focused on trust and excellence in an attempt to boost AI benefits throughout its Member States, protect fundamental rights, and reduce the technology’s risks.

With this approach in place, it is now crucial to implement the next steps fairly across the Member States and to stay critical of the new European approach to AI, to ensure that EU citizens and the European economy can fully benefit from an ethical and safe Artificial Intelligence ecosystem.
Core Concepts
An Artificial Intelligence system (AI system) is defined by the European Commission in its 2021 proposal for an AI regulation as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”

The techniques listed in Annex I are the following:
  • Machine learning approaches, including supervised, unsupervised, and reinforcement learning, using a wide variety of methods, including deep learning (a machine learning technique based on multi-layered neural networks that learn patterns directly from large amounts of data);
  • Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference, and deductive engines, (symbolic) reasoning, and expert systems;
  • Statistical approaches, Bayesian estimation, search and optimisation methods.
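To make the first category more concrete, here is a purely illustrative sketch in Python (not part of the Act, with invented data): a tiny supervised machine-learning model, a nearest-neighbour classifier, that generates predictions for a human-defined objective from labelled examples.

```python
# Minimal illustration of a supervised machine-learning approach:
# a 1-nearest-neighbour classifier that, for a human-defined objective
# (label prediction), generates outputs from labelled examples.

def nearest_neighbour_predict(train, new_point):
    """Return the label of the training example closest to new_point."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(train, key=lambda example: distance(example[0], new_point))
    return closest[1]

# Toy training data: (features, label) pairs, entirely invented.
training_data = [((1.0, 1.0), "low risk"), ((8.0, 9.0), "high risk")]

print(nearest_neighbour_predict(training_data, (2.0, 1.5)))  # -> low risk
```

Note how the system's behaviour is entirely shaped by its training data: this is exactly why data quality, discussed next, matters so much.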

Data quality is a crucial concept when discussing AI systems because it greatly affects their effectiveness and reliability. The Data Management Body of Knowledge defines data quality management as “the planning, implementation, and control of activities that apply quality management techniques to data, in order to assure it is fit for consumption and meets the needs of data consumers” (1). Data quality itself depends on a long list of factors, including accuracy, completeness, consistency, integrity, reasonability, timeliness, uniqueness/deduplication, validity, and accessibility.

Using low-quality, outdated, or incomplete data can lead to a system generating biased, incorrect, or otherwise misleading outcomes.
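Some of these quality dimensions can be checked with very simple rules. The following Python sketch, using invented records, tests a toy dataset for completeness, uniqueness, and validity:

```python
# Illustrative sketch (hypothetical records): simple checks for three of the
# data-quality dimensions mentioned above.

records = [
    {"id": 1, "age": 34, "country": "BE"},
    {"id": 2, "age": None, "country": "PT"},   # incomplete: missing age
    {"id": 2, "age": 29, "country": "PT"},     # duplicate id
    {"id": 3, "age": 214, "country": "FR"},    # invalid: implausible age
]

def completeness(rows):
    """Share of rows with no missing values."""
    return sum(all(v is not None for v in r.values()) for r in rows) / len(rows)

def duplicate_ids(rows):
    """Ids that appear more than once."""
    seen, dupes = set(), set()
    for r in rows:
        (dupes if r["id"] in seen else seen).add(r["id"])
    return dupes

def invalid_ages(rows, low=0, high=120):
    """Rows whose age falls outside a plausible range."""
    return [r for r in rows if r["age"] is not None and not low <= r["age"] <= high]

print(completeness(records))       # 0.75
print(duplicate_ids(records))      # {2}
print(len(invalid_ages(records)))  # 1
```

Real data-quality tooling is far more elaborate, but the principle is the same: problems like these, if left in the training data, propagate directly into the system's outputs.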

In recent years, data quality has become a more serious concern in assessing AI systems. This trend can be observed in policy documents and newly published papers.

The fundamental rights of European citizens are listed in the EU Charter of Fundamental Rights. They are summarised in the Treaty on European Union through the following statement: “The Union is founded on the values of respect for human dignity, freedom, democracy, equality, the rule of law and respect for human rights, including the rights of persons belonging to minorities. These values are common to the Member States in a society in which pluralism, non-discrimination, tolerance, justice, solidarity and equality between women and men prevail”. (2)

The Charter of Fundamental Rights is a document aimed at protecting and promoting citizens’ rights and freedoms in light of the evolution of society, social progress, and scientific and technological developments.

Several reports and studies have highlighted that the use of certain AI systems could go against fundamental rights. Examples of threatened rights are the right to non-discrimination, access to a fair trial and effective remedies, and the protection of personal data. More rights could be in danger, as will be further discussed in the Main Challenges section.

____
  1. What Is Data Quality? - DATAVERSITY
  2. Why do we need the Charter? | European Commission (europa.eu)
Key Actors
The European Commission has been monitoring the advancements of AI in the EU in recent years. As the only institution capable of proposing new legislation at a European level, it has also worked on a set of rules and actions for AI that are yet to be implemented in the Member States.
The current approach described by the Commission is centred on excellence and trust in AI, and aims to boost research and industrial capacity as well as ensure respect for fundamental rights. This approach is reflected in the EU’s proposal for an AI regulation, the Coordinated Plan on Artificial Intelligence, and the Communication on Fostering a European approach to Artificial Intelligence, which are the three components of the major AI package published by the Commission in April 2021.

The European Union Agency for Fundamental Rights (FRA) is an independent centre of reference and excellence for promoting and protecting human rights in the EU. Its goal is to defend EU citizens’ fundamental rights and raise awareness of those rights.

The FRA operates by collecting and analysing law and data and identifying trends, in order to advise policy- and lawmakers on rights at the EU, national, and local levels. In December 2020, the FRA published a report on AI and fundamental rights. This report explores how fundamental rights are taken into account in the usage and development of AI applications. Another relevant report, published in June 2019, focuses on mitigating bias and error in Artificial Intelligence.

Companies play a big role in the development of the AI ecosystem. There are two types of relevant companies: those that focus on creating AI systems, and those that use them; these roles can sometimes be combined. It is important to define what the responsibilities of these companies are when it comes to the applications of AI.

In 2020, there were 769 AI-focused start-ups in the EU, representing 22% of the global number. In that same year, it was recorded that only 7% of companies within the EU (3) used Artificial Intelligence, with a large disparity between EU Member States, as shown in the graph below.

____
3. This number only takes into account companies with at least 10 employees and excludes the financial sector.
EU citizens can also be users of AI systems or be subject to them, sometimes without knowing it. It is important to ensure that AI benefits citizens and communities, in line with European values and principles. According to a report from the European Parliament, AI systems in the wrong hands present risks such as spying, monitoring of citizens, and illegal acquisition of information.

With this topic, we are mainly interested in how AI systems affect citizens and their fundamental rights.
Benefits and Opportunities
Although AI may raise important concerns in today’s age, it is important to remember the benefits and opportunities that AI systems can bring to modern society because they constitute the core reasons why the EU wants to move forward with these technologies. Below are the main advantages of AI stated by the European Parliament.

Benefits for people
AI has a big potential to improve people’s day-to-day lives by facilitating access to information, education, and training, as well as by providing higher quality healthcare, safer transport systems, and more individually tailored products and services. AI can also reduce risks in certain workplaces by letting robots undertake the more dangerous parts of the work.

Opportunities for businesses
In 2020, the Parliament’s Think Tank estimated that AI can increase labour productivity by up to 37%, while also improving machine maintenance and customer service, boosting sales, and saving energy. The use of AI can also enable the development of a new generation of products and services, which reinforces certain industries and allows for new ones to be created.

Opportunities in public services
In the public sector, AI can reduce costs and offer new possibilities regarding education, public transport, energy and waste management, as well as making products more sustainable. It was estimated in 2020 that AI could help reduce greenhouse gas emissions by 4% by 2030.

Security and Safety
AI can help in crime prevention and the criminal justice system. It is already being used to detect crimes such as fraud and money laundering, and in the future, it is expected to play a role in fighting illicit activities such as transportation of illegal goods, terrorist activities, and human trafficking. Its ability to quickly process massive data sets and spot anomalies or risk factors is what makes AI such a powerful tool in this domain. It could even be used in military matters for defence and attack strategies in hacking and cyberwarfare.
Main Challenges
After covering some of the main opportunities that a technology such as AI can bring to society, it is crucial to discuss its shortcomings and the challenges that implementing AI in the EU has raised.

Data Quality and Fundamental Rights
With AI being present in a range of sectors and areas of life that directly affect citizens, notably education, work, social care, health, and law enforcement, there are many risks concerning the preservation of fundamental rights. This is mainly because these rights are not always thoroughly taken into consideration when designing the algorithms and models that dictate how AI systems function, or when acquiring the necessary data. In fact, efficiency is often the priority, since for many companies and institutions AI is a tool to reduce costs and optimise processes.

Using low-quality data presents a direct threat to citizens' rights in ways that can sometimes seem obvious, but not always. First of all, using non-representative or biased data can lead to unequal treatment of people based on characteristics such as sex, age, disability, sexual orientation, ethnic origin, and religion, which goes against the right to non-discrimination. This can happen when there are structural differences in the training data for such characteristics. A good example is a hiring algorithm developed by Amazon that was gender-biased, favouring men, illustrating AI’s effects on the right to gender equality. Another bias can be found in face-recognition technologies when used on non-white people, as shown in the graph below.
These statistics, collected in 2018, immediately sparked responses from companies such as IBM and Microsoft, which announced further steps in data collection and testing to reduce bias towards certain demographics. Other examples of racial discrimination by algorithms can be found in law enforcement technologies. In the US, black people are over-represented in the mug-shot data used by algorithms to make crime predictions. This creates a feedback loop: racially biased policing strategies lead to a disproportionate number of arrests of black people, which feeds back into the data used for those predictions.
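This feedback loop can be made concrete with a deliberately simplified simulation in Python (all numbers invented). Even though the two groups behave identically in the simulation, a system that allocates patrols in proportion to past recorded arrests reproduces the initial 60/40 skew indefinitely instead of correcting it:

```python
# Deliberately simplified simulation of the feedback loop (invented numbers).
# Both groups have identical underlying behaviour; only the historical
# arrest records differ.

arrests = {"group_a": 60, "group_b": 40}  # group_a is over-represented

for year in range(5):
    total = sum(arrests.values())
    # A "predictive" system allocates 100 patrols proportionally to past arrests...
    patrols = {g: round(100 * n / total) for g, n in arrests.items()}
    # ...and more patrols mechanically produce more recorded arrests,
    # regardless of actual behaviour.
    for g in arrests:
        arrests[g] += patrols[g]

share_a = arrests["group_a"] / sum(arrests.values())
print(round(share_a, 2))  # -> 0.6: the initial skew never washes out
```

The point is not the exact numbers but the mechanism: the system's own outputs become its future inputs, so an initial bias is locked in rather than corrected.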

Moreover, some economic and social rights can also be affected by discrimination when it comes to access to employment, social services, and healthcare because of AI systems. Some research indicates that using such algorithms in those fields could have a negative impact, especially on poor people.
As for AI used in public administration, concerns about the right to good administration can be raised. Although automation should allow for a more efficient, impartial, and fair treatment of citizens’ requests, this right also includes the obligation for the administration to give reasons for its decisions, which can be a challenge for systems using AI due to transparency issues.

Finally, AI can have a strong impact on issues related to respect for private and family life and the protection of personal data. Collecting and manipulating a person's data and information in order to train an algorithm should be subject to rules. However, it is unclear how the General Data Protection Regulation (GDPR) applies to data used in AI systems; for instance, it is uncertain whether the GDPR's requirement of “data accuracy” can be interpreted as a “data quality” requirement for algorithm training.

Transparency in AI
AI has well-documented transparency problems, mainly due to its technical complexity: the complex tasks and objectives tackled by AI systems often require complex, difficult-to-interpret models. In these models, it is hard to tell what the relationship is between the input (the information given to the system) and the output (the results the system produces based on that information). This is reinforced by the effects of Machine Learning (ML), which lets the system decide largely by itself, based on its training data, which factors are the most important and determinant for obtaining a result. With sometimes millions of data points and hundreds of different features, the design of ML algorithms can create a black-box effect (4) that makes the system difficult to understand even for the creators of the algorithm. This in turn reduces humans’ control over the functioning of the algorithm, leaving them unable to prevent some of the biases described previously. The image below shows an overview of the different types of biases that can cause a biased output in ML systems; it is often very complicated to identify where the errors come from.

____
4. This metaphor describes a system of which we can only see the input and output, while the internal process is hard to see or understand.
There exists a set of practices, called MLOps, defined with the goal of deploying and maintaining ML models in production reliably and efficiently. Adopting these practices should bring more transparency into the systems, but they are not always used in the industry.

75% of consumers demand more transparency from AI-powered tools, and it is clear that this lack of transparency harms public trust in AI systems.

Additionally, citizens who are unable to understand the system that dictated a certain decision or caused a situation may be unable to contest or challenge these outcomes adequately.
Policy Approach
Despite lagging behind other global powers in financial investment in AI, the EU excels in research in this field and has a large pool of digital talent to build on. As such, the EU recognises the importance of developing its AI ecosystem; in 2020, the Commission made AI a top priority for the following five years, as well as a principal building block of the Digital Decade strategy.

In April 2019, an EU expert group published the Ethics Guidelines for Trustworthy AI. This report defines seven key requirements for AI to be deemed trustworthy: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability.

Then in April 2021, the Commission published an AI package with three major components: EU’s proposal for an AI regulation, the Coordinated Plan on Artificial Intelligence, and the Communication on Fostering a European approach to Artificial Intelligence. Understanding these three game-changers is crucial to understanding the current state of AI legislation within the EU.

Proposal for an AI regulation (Artificial Intelligence Act)
This proposal for a regulation from the European Commission aims at laying down harmonised rules on AI across the EU. First, the Act updates the definition of Artificial Intelligence, as presented in the Core Concepts section of this Topic Overview. This new definition brings legal clarity to the scope of the regulation by enumerating all the computer-science techniques that will be regulated. As technologies evolve, the list can be updated. Some critics deem this definition too broad and confusing for developers.

It is important to note that the Act presents a risk-based approach and that AI systems will be subject to more or less regulations depending on their risk category as shown in the pyramid below.
Most AI applications that impact Fundamental Rights are classified as “high risk”. Products in this category are required to undergo an assessment against EU safety and health standards in order to obtain the CE marking; only products bearing the CE marking can enter the European market. The standard requirements follow the same principles as the key requirements from the 2019 Ethics Guidelines (data and data governance, transparency for users, human oversight, accuracy, robustness and cybersecurity, traceability and auditability).
A comprehensive description of all other restrictions from the other risk categories can be found in this article.

Violations of data governance requirements or noncompliance with the unacceptable-risk prohibitions can lead to fines of up to 30 million euros or 6 percent of a business’s worldwide annual turnover, whichever is higher. Noncompliance with other provisions of the AI Act can carry a penalty of up to 20 million euros or 4 percent of worldwide annual turnover.
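The penalty rule reads as a simple calculation: the applicable maximum is whichever is higher, the fixed ceiling or the percentage of worldwide annual turnover. A small illustrative Python sketch, with the figures taken from the proposal:

```python
# Upper bound of fines under the AI Act proposal (illustrative sketch).
# severe=True: data governance violations / unacceptable-risk prohibitions.
# severe=False: other provisions of the Act.

def max_fine(turnover_eur, severe=True):
    ceiling, rate = (30_000_000, 0.06) if severe else (20_000_000, 0.04)
    return max(ceiling, rate * turnover_eur)

print(max_fine(1_000_000_000))  # 60000000.0: 6% of 1 bn turnover exceeds 30 m
print(max_fine(100_000_000))    # 30000000: the fixed ceiling applies
```

For large multinationals the percentage dominates, which is what gives the Act its deterrent weight against global tech companies.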

The potential international impact of the AI Act has been compared to that of the GDPR: European standards became worldwide standards because global companies, having to adapt to European regulations, decided to adopt single global privacy policies. The same could happen with the AI Act, since it covers all providers of AI systems in the EU, regardless of where those providers are located.

Coordinated Plan on Artificial Intelligence

The Commission’s Coordinated Plan on AI is centred on collaboration with the Member States. The goal is to accelerate, act, and align on AI matters throughout the Union, in order to deploy AI investments in the European economy and to implement strategies and plans with a common vision, limiting disparities between Member States.

The main actions in the plan are setting enabling conditions for AI development and uptake in the EU, making the EU the place where excellence thrives from the lab to market, ensuring that AI works for people and is a force for good in society, and building strategic leadership in high-impact sectors. Any measures following this vision are highly encouraged and should also work hand in hand with the Proposal for Regulation.

Communication on Fostering a European Approach to Artificial Intelligence

The communication published by the European Commission serves as a concise summary of, and rationale for, the two other parts of the AI package. It gives context to the decisions that have been taken and the policy choices made, explains the overarching steps of the regulations, and stresses the importance of this matter.

Now that the framework has been set by the European Commission on AI policies and strategies, it is up to the other actors to implement this common vision locally or globally.

Food for thought
Artificial Intelligence is a disruptive yet unavoidable global innovation that, for better or for worse, has not yet reached its full potential. The EU wants to play a leading role in shaping this space, promoting a better understanding and a safer development of technologies using AI while mitigating bias and discrimination. Is the current EU approach sufficient and well-balanced? How can these ideas be concretely implemented by Member States and companies? Are there amendments or possible changes that should be discussed?

Furthermore, it is important to see how the rest of the world responds to such legislation. Taking into account that most AI technologies are not produced and developed inside Europe, is there any guarantee that leading tech companies will obey these rules? And do they have the potential to become a global example, as the GDPR standards did?

It should also be noted that this seemingly very technical issue is rooted in a debate of ethical and social values. It is not so much the optimisation of AI and its algorithms that is at stake but rather the direct impact it can have on society. With that in mind, should engineers cooperate with social scientists or even philosophers to ensure new applications of AI do not present a risk to society and citizens' fundamental rights? And who should be ultimately responsible for keeping AI systems in check?
Essential Reading
1. A 10-page official EU document giving a more in-depth description of the general European AI strategy.
2. A short report by the European Institute of Public Administration (EIPA) on how the EU AI Act will impact Member States.
3. Constructive criticism of the EU AI Act by Human Rights Watch in a Q&A format. I highly recommend looking this one up, as it brings up flaws in the current regulation and points to be improved. You can benefit from the Q&A format and only look up the questions that you find interesting.
4. An extensive report by the FRA on AI and fundamental rights. It is a very long document; I recommend reading only the first section, Key Findings and FRA Opinions (pages 5-13).
5. An extensive article published by Cambridge University Press treating the future threats and opportunities of AI in Europe.
6. Real case of AI threatening Fundamental Rights: "AI-powered recruitment can be racist or sexist – and here’s why", a short article published by DiversityQ, showcasing a practical example of AI-powered processes that go against Fundamental Rights, and how to improve them.
7. Informative entertainment on AI: Coded Bias, a Netflix documentary treating how AI-powered technology such as facial recognition inherits human bias and can have a detrimental effect on people. The scope of the documentary focuses mainly on the US but is relevant nonetheless.
8. An easy-to-listen-to podcast addressing all sorts of different perspectives on AI. Some episodes can be relevant for this topic (around 30 minutes per episode):
  • Episode 8: a review of AI research in different regions of the world with a focus on Europe.
  • Episode 50: Eric Horvitz on Ethical Uses of AI for National Security.
  • Episode 69: Google’s Kent Walker on Ethical AI.