Paving the way for the European Union’s Artificial Intelligence Act


by Steven Whelan, Associate in Public and Regulatory Department, Fieldfisher LLP

Artificial Intelligence (“AI”) is a technology that has been around for decades; however, AI technology has advanced at an unprecedented rate in recent years, with generative AI systems like ChatGPT at the forefront of this technology revolution. The increasing use of AI technology in content creation has sparked controversy, particularly regarding its use of copyrighted materials.

Lawmakers in the European Union (the “EU”) are leading the way in AI regulation and recently agreed to push forward draft legislation designed to regulate AI technology and the companies developing it. On 27 April 2023, members of the European Parliament (MEPs) reached an agreement on the provisions of the proposed landmark Artificial Intelligence Act. Initially proposed as a set of draft rules by the European Commission two years ago, details of the proposed Act will be finalised in the next round of deliberations among EU lawmakers and member states.

The proposed Act intends to classify AI tools according to their risk level, ranging from minimal to limited, high, and unacceptable. Under the proposed Act, high-risk tools will not be automatically banned; rather, their use will require greater transparency. In this regard, European Parliament deputy Svenja Hahn commented as follows:

“Against conservative wishes for more surveillance and leftist fantasies of over-regulation, parliament found a solid compromise that would regulate AI proportionately, protect citizens’ rights, as well as foster innovation and boost the economy.”

Consequently, generative AI tools such as ChatGPT, Midjourney and Google Bard, considered high-risk, will be subject to more stringent transparency procedures and will be obligated to disclose any copyrighted materials used to train their systems.

Furthermore, AI used to manage critical infrastructure that could potentially pose a severe environmental risk will be categorised as high-risk. Other areas of scrutiny include biometric surveillance, the spread of misinformation, and discriminatory language.

While controversies around AI and its use of copyrighted materials persist, the EU’s efforts to regulate AI aim to strike a balance between harnessing the potential benefits of the technology and protecting individual privacy and intellectual property rights.

In recent years, the possibilities for AI in sectors such as financial services, insurance, healthcare, life sciences, education, transportation, and telecommunications have become a reality with AI already augmenting human capabilities and changing the way we work.

Looking to the finance sector, there is now a clear convergence of finance, tech, innovation and AI, with AI being used in fraud detection and algorithmic trading. Within the broader landscape of financial services, AI applications are being used in the insurance industry to assess risk in underwriting decisions using massive data sets, and in claims processing. It is important that regulators keep pace with rapid changes in technology and consumer behaviour to maintain a safe and stable financial system.

The April 2023 edition of Views magazine by Eurofi, a European think tank comprising public and private enterprises dedicated to financial services, included a section on AI and machine learning applications in finance in the EU. The section featured five articles on AI and machine learning in financial regulation within the EU, all written in anticipation of the upcoming Artificial Intelligence Act. One of the authors, Georgina Bulkeley, the director for EMEA financial services solutions at Google Cloud, emphasised the importance of regulating AI properly, saying, “AI is too important not to regulate. And it’s too important not to regulate well.”

The proposed legislation builds on the existing regulatory framework in the EU and aims to ensure that AI is utilised in a way that is safe, ethical, and respects fundamental rights. As AI continues to develop and become more prevalent in our daily lives, it is crucial that robust and effective regulations are put in place to ensure that it can be used for the benefit of all.

About the author

Steven Whelan advises regulatory bodies on the implementation of statutory obligations under their governing legislation and other legislation arising (both national and EU).

His experience in financial regulation has involved working on-site with a client on a large-scale investigation into breaches of contractual and other regulatory standards.

In addition to being a Solicitor, Steven is also a qualified New York Attorney at Law, with experience as a Litigation Attorney representing a range of clients, from insurance companies to state bodies, in the New York Supreme Court.