The EU AI Act aims to safeguard the rights of individuals whilst not stifling innovation

The EU's new AI Act has individual human rights as its primary focus, but how will those rights be protected under the Act?    


Just over a year ago, on Wednesday, March 13, 2024, the European Parliament gave final approval for the EU AI Act, with the rules coming into force the following August.

The Act aims to regulate AI technologies within the EU's borders, balancing innovation with user safety, fundamental rights and ethical considerations.

The legislation introduced a new tiered classification of AI systems according to their perceived level of risk. This allows for a modulated regulatory approach that varies in rigour in accordance with the level of risk involved and the potential impact of an AI system on users and society.

Four broad levels are defined — unacceptable, high, limited and minimal. Unacceptable risk covers AI applications that pose a clear threat to people’s safety, livelihoods or rights, such as manipulative or exploitative systems. These are banned outright.

The high-risk category includes AI systems with significant implications for individual rights or public safety, such as those used in critical infrastructure, education, employment and law enforcement. These applications are subject to stringent compliance requirements.

Limited risk AI applications involve some level of interaction with users, such as chatbots, and organisations deploying them are required to inform users that they are interacting with an AI system. Most AI applications fall into the minimal risk category and the regulation imposes minimal requirements for them, allowing for the broad development and use of such technologies.

It is clear that the EU has come down firmly on the side of individual human rights with the AI Act. But how will those rights be protected under the Act?   

Keith Power, partner with PwC Ireland.

“The rights which the EU AI Act aims to protect are those set out in the EU’s Charter of Fundamental Rights,” explains PwC Ireland partner Keith Power. “The Act contains a number of mechanisms to try to achieve this protection, but it can be summarised in three parts — prohibition, risk mitigation, and education.”

The Act prohibits a pre-defined list of AI uses which are deemed, by their very nature, to violate or pose an unacceptable risk of violating fundamental rights, he continues. 

“The risk posed by other AI uses is mitigated through the imposition of obligations on organisations to identify, assess, and address any risks they pose and, in some cases, to demonstrate that the AI system complies with existing technical and safety regulations. 

"And finally, the Act aims to empower EU citizens to make informed choices by being AI literate. While those are the intentions of the Act, its effectiveness in achieving those aims remains to be seen and may be a function of the extent to which the Act is enforced by each member state.”

There has been something of a ‘Boston versus Berlin’ debate over differing attitudes to AI regulation on either side of the Atlantic. One view which has emerged is that the emphasis on human rights will constrain innovation and allow countries like the US to steal a march on Europe in this new technological frontier.

“The rhetoric around this viewpoint somewhat overstates the issue,” says Power. “By taking a risk-based approach, the burden of compliance with the EU AI Act falls very much on the minority of higher-risk AI systems. For most organisations the compliance effort will be limited and can be combined efficiently into a responsible AI framework alongside the other practical, non-regulation driven AI considerations such as data governance and security.” 

And there are ways to address potential constraints on innovation, he adds. “With the right funding and political will, EU-led initiatives such as regulatory sandboxes and industry-focused AI guidance, coupled with member state AI education and AI incentive programmes can further foster investment in AI innovation.” 

Ultimately, the balance between the needs of business and the rights of individuals comes down to values and the kind of society we want to live in, Power adds. 

“The EU values, which are specifically referenced within the EU AI Act, are human-centric and focused, in general, on the greater societal good. The Act does permit effective override of individual rights, but these are limited to, for example, specific law enforcement circumstances. The balance which the Act strikes may not be perfect, nor is it universally consistent with alternative regulatory approaches being taken in some other jurisdictions, but it does seem to be culturally relevant for the citizens it aims to protect.” 

And there is a practical benefit to this balanced approach, he points out. “Widespread AI adoption requires public trust. Appropriately enforced human rights-focused AI regulation is a cornerstone of establishing that trust.”


© Examiner Echo Group Limited