A Flawed First Step, Not a Complete Solution.

23 January 2023

On 12 April 2021, the European Commission launched the EU AI Act (AIA) proposal.

The AIA is designed to introduce a common regulatory and legal framework for artificial intelligence and foster “trustworthy AI”. In doing so, it encompasses all sectors and all types of artificial intelligence. The ultimate aim is to ensure that AI systems are safe and respect existing laws on fundamental rights and EU values.

The world’s first AI law

Following months of internal negotiations, the EU Council adopted a common position in December 2022 – and the draft legislation took another step towards becoming the first attempt to legislate AI anywhere in the world. In its current form, the Act classifies risks by dividing applications of AI into three categories: prohibited, high-risk and low-risk. 

The three levels of AI risk

The first category relates to any subliminal, manipulative, or exploitative systems that cause harm; real-time, remote biometric identification systems used in public spaces for law enforcement; and all forms of social scoring. All would be banned by the Act. 

The second category covers applications such as systems that evaluate consumer creditworthiness, CV-scanning tools that rank job applicants, or any systems used in the administration of justice. As such, it would be subject to the largest set of requirements.

The third category comprises technologies such as AI chatbots, computer games, inventory management systems, and most other forms of AI. These systems would face significantly fewer requirements, primarily transparency obligations such as making users aware they are interacting with a machine.

With AI becoming increasingly embedded into everyday life, we asked Marian Gläser, CEO of brighter AI, for his view of this ground-breaking draft law.


So Marian, what is your initial opinion of the AI Act?

At first glance, it looks like a positive move. After all, AI is of paramount importance for our future. Similar to the famous (or infamous) EU GDPR, this act appears to be driven by values. Its stated aims are to “harmonize the legal setup within the 27 member states, while protecting Europeans from a new wave of software”. In doing so, it wants to steer us towards a fair, open, and equality-based future for artificial intelligence within the 27 member states. And that’s great…

There’s a ‘but’ coming… isn’t there?

You guessed it! For me, the AI Act only represents a step in the right direction. It’s a long way from a complete and robust answer to the challenges of the future. I think that rolling out the legislation in its current state would be a mistake. It just doesn’t go far enough to attain its stated goal of “harmonizing the legal setup within the 27 member states, while protecting Europeans from a new wave of software”.

Okay, so what doesn’t work?

Despite the good intentions behind the Act, the legislation in its current state concerns me. To my mind, the lack of precise definitions, inadequate scoping, and the practicalities of actually applying it to the real world all make things rather problematic.

I also see a strong focus on high-risk applications – something that could lead to over- and under-regulation at the same time. 

On one hand, if the AI Act fails to adequately cover next-generation and currently unknown types of AI, or if its definitions become outdated, companies will find workarounds to avoid the regulations in the future. On the other hand, AI technologies that fall under the Act but do not carry any particular risks may be over-controlled, which, in turn, could ultimately stifle innovation.

There are also potential downsides of AI that need to be considered, such as heavy algorithmic and societal bias and a lack of transparency. These downsides won’t stop organizations from working on potentially ‘life and death’ machine learning in areas such as self-driving vehicles, human judgment systems, and large-scale facial recognition. These applications are becoming more real and more widely used day by day, so adequate regulation of intelligent autonomous solutions cannot wait much longer.

So is this attempt to regulate AI taking the wrong approach?

I believe so, yes. Many technologies can be used for positive applications and, at the same time, for more delicate or even dangerous ones. My point is: why focus on the technology rather than the use cases themselves? It’s a bit like creating a law based on car engines to enforce the use of seat belts and promote safer driving. We need to regulate the applications, not the enabling technology.

How do you think it could work better?

The AIA gets the spirit of the law right, but not the practicalities. It might sound odd, but taking AI out of the AI Act would help. In other words, detach the technology from the use cases that need to be regulated. Or at the very least, if AI remains at the core of the Act, the legislation should also encourage and strengthen positive use cases. For example, create dedicated areas with more flexibility and less regulation for pilot projects that aim to create positive uses of AI. After all, the technology itself is an opportunity, not a threat. AI offers limitless possibilities to evolve and improve privacy, health, and environmental solutions, to name just a few.

And finally, how does the Act impact brighter AI?

As things stand, the AIA prohibits ‘real-time remote biometric identification in publicly accessible spaces for the purpose of law enforcement unless certain limited exceptions apply’. I’ve seen that the European Digital Rights group has proposed stronger bans, including prohibiting the use of remote biometric identification in public spaces and a general ban on any use of AI for automated recognition of human features.

As the CEO of a start-up that uses AI to anonymize faces and license plates, I’m obviously keeping a close eye on things. Our technology does not threaten European values but rather supports them, so I don’t perceive any immediate impact. Even so, the final EU AI Act could apply to us and, if so, it could risk putting the brakes on innovation in privacy protection.

Marian Gläser
CEO/Co-founder of brighter AI