At its September plenary, the EESC welcomed the proposed Artificial Intelligence Act (AIA) and Coordinated Plan on AI.
The new legislation places health, safety and fundamental rights at its centre, says the EESC, and resonates globally by setting a series of requirements that developers both inside and outside Europe will have to comply with if they want to sell their products in the EU.
"This is a new step for Europe towards retaining sovereignty in the area of AI," says Marie Françoise Gondard-Argenti, rapporteur of the EESC opinion on the AI Coordinated plan. "Competitiveness and ethics are not contradictory terms – quite the contrary! There is a challenge here and we need to look at it through the prism of European values. That will enable us to become world leaders in this field, all the while remaining true to our values".
Social scoring and redress: there's the rub
In the EESC's view, one of the weaknesses of the proposals lies in the area of "social scoring". The Committee flags up the danger of this practice gaining currency in Europe, as it has in China, where the government can go so far as to deny people access to public services.
The draft AIA does include a ban on social scoring by public authorities in Europe, but the EESC would like to see it extended to private and semi-private organisations, to rule out uses such as establishing whether an individual is eligible for a loan or a mortgage. The EESC sees no place in the EU for scoring the trustworthiness of its people based on their social behaviour or personality characteristics, regardless of who is doing the scoring.
"It is important that the AIA halts the current trajectory of public and private actors using ever more information to assess, categorise and score us", says Catelijne Muller, rapporteur of the EESC opinion on the AIA and author of the EESC's first, trail-blazing opinion on AI in 2017. "The AIA should attempt to draw a clear line between what is considered 'social scoring' and what can be considered an acceptable form of evaluation for a certain purpose. That line can be drawn where the information used for the assessment is not reasonably relevant or proportionate".
The EESC also points out the dangers of listing "high-risk" AI, warning that this listing approach can normalise and mainstream quite a number of AI practices that are still heavily criticised. Biometric recognition, including emotion or affect recognition, where a person's facial expressions, tone of voice, posture and gestures are analysed to predict future behaviour, detect lies and even judge whether someone is likely to be successful in a job, would be allowed. So would assessing, scoring and even firing workers based on AI, or assessing students in exams. The latter practice has crept in during the pandemic and has been judged extremely invasive by students, with AI systems tracking their eye movements in front of screens, keystrokes, background noise and more.
In addition, the proposed requirements for high-risk AI cannot always mitigate the harm to health, safety and fundamental rights that these practices pose. Hence the need for a complaint and redress mechanism for people suffering harm from AI systems. The EESC flags up this gap, asking the Commission to implement such a mechanism so that Europeans have the right to challenge decisions taken solely by an algorithm.
More generally, in the EESC's view, the AIA fails to spell out that the promise of AI lies in enhancing human decision making and human intelligence. It works on the premise that, once the requirements for medium- and high-risk AI are met, AI can largely replace human decision making.
"The AIA lacks notions such as the human prerogative of decision making, the need for human agency and autonomy, the strength of human-machine collaboration and the full involvement of stakeholders," says Catelijne Muller. "We at the EESC have always advocated a human-in-command approach to AI, because not all decisions can be reduced to ones and zeros. Many have a moral component, serious legal implications and major societal impacts, such as on law enforcement and the judiciary, social services, housing, financial services, education and labour regulations. Are we really ready to allow AI to replace human decision making even in critical processes like law enforcement and the judiciary?"
The Artificial Intelligence Act and the Artificial Intelligence Coordinated Plan were presented by the European Commission in 2021 and 2020 respectively.
The AIA sets out a horizontal regulatory framework that encompasses any AI system affecting the single market, whether the provider is based in Europe or not, using a risk-based approach. It also sets up a series of escalating legal and technical obligations depending on whether the AI product or service is classed as low, medium or high-risk. A number of AI uses are banned outright.
The new Coordinated Plan succeeds the 2018 plan, which established a joint commitment by the European Commission and Member States to work together to maximise Europe's potential to compete globally, and which led most Member States to adopt national AI strategies. The new plan takes the next step, putting forward a concrete set of joint actions for the European Commission and Member States on building EU global leadership in trustworthy AI.
The EESC has been a leading contributor to the debate on AI in Europe ever since its first pioneering opinion on AI in 2017, which contained many of the defining elements of the AI strategy later adopted by the Commission. First and foremost, it included the human-in-command approach which underlies all of its work on AI.
Continuing its flurry of activity on AI into 2021, the EESC will be hosting its second AI Summit on 8 November, organised in cooperation with the European Parliament's Special Committee on AI (AIDA). The summit will focus on the AIA, skills and the uptake of AI in SMEs.