Europe’s AI Act is taking shape. How will the UK respond?

As the EU pushes through a historic AI Act, its neighbor is left wondering how to keep up.

In 2012, artificial intelligence was unable to compete with even the doziest of dogs. That year, Google researchers made a breakthrough by training a neural network of 16,000 processors on unlabeled YouTube stills. In a matter of days, the network taught itself to recognize pictures of cats.

The increase in computing power and the expansion of foundational datasets since then have changed AI from an abstract futuristic concept to a commodity. We are surrounded by it—from word processors predicting what we will type next to YouTube videos automatically generating subtitles. And the transformational potential for the capital markets is staggering.

But the meteoric rise of AI and its rapid adoption by financial institutions has left regulators wondering how best to oversee it. At a summit organized by City & Financial Global, spokespeople from various UK regulators grappled with the imperatives of stimulating innovation in such a rapidly advancing technology while ensuring its safe use.

“These are markets where we can’t just wait for things to go wrong before we step in. But neither do you want a heavy-handed intervention to stifle innovation,” said Will Hayter, senior director of the digital markets unit at the Competition and Markets Authority (CMA).


The question of regulating AI is receiving particular attention in the UK at the moment, largely prompted by discussions on the continent. The EU Parliament set out its proposals for an AI Act in June this year, defining standards for software and banning certain applications of AI.

The UK, which has stated its intention to become a “science and technology superpower by 2030,” is now under pressure to respond. In the summer of last year, the government published a policy paper proposing a light-touch approach centered on a set of cross-sectoral principles—such as transparency, fairness, and security—that AI developers and vendors would be asked to adhere to.

“I think that what the UK has done is quite sensible,” said Reena Patel, the head of data privacy for Emea at UBS, who also spoke at the summit. “The people who are developing or implementing the AI, particularly in their own sectors, are going to know the risks. So in terms of putting principles in place and saying, ‘You’ve got to assess that, the onus is on you,’ it makes companies more responsible for what they’re doing.”

However, the policy paper is only a first step, and the UK must now choose whether to continue down a low-touch, principles-based path or cleave closer to the EU.

One regulator to rule them all?

One area where the UK’s policy paper diverges strongly from the EU’s proposed AI Act is in its highly decentralized model. The existing regulators would be tasked with interpreting and implementing the list of principles as they see fit.

The EU, on the other hand, proposes a comprehensive set of rules. All AI products will be categorized into one of four risk levels. Systems posing unacceptable risk will be banned outright, while high-risk systems—the next tier down—will be subject to a conformity assessment and audit before being granted the CE mark, which indicates a product has passed certain tests and will allow them to enter the market.

Another speaker at the City & Financial Global summit, Professor the Lord Darzi of Denham, called for the UK to follow the EU’s lead by setting up a new regulator dedicated to AI. “We know that there is a shortage of regulators well-versed in AI. They are like gold dust. So concentrating knowledge in a single regulator, in my view, would be more practical and more efficient. A single regulator would also be better able to respond swiftly to this fast-changing environment,” he said.

This suggestion was not met with enthusiasm by regulators, however. Stephen Almond, executive director for regulatory risk at the Information Commissioner’s Office, recognized that there are gaps in the existing regulation of AI, but said, “I think we risk oversimplifying the debate around how AI is going to have to be regulated by simply saying, ‘Let’s leave it to a single regulator.’ Because actually, I think what we need to do is understand where those gaps are.”

Key concerns

While they wait for further steps from the government, regulators are focused on the biggest threats posed by existing AI applications. Foremost among these is the thorny issue of explainability—working out how AI produced the results it did.

“The problem with these models is that you can’t really intuitively understand them. So we’re a long way away from feeling as though we can really trust them,” says Chris Murphy, CEO of trading analytics and execution services provider Ediphy.

So-called hallucination also poses a problem for firms deploying AI. This is particularly common in large language models (LLMs), which sometimes confidently give incorrect answers to questions posed by users.

“I don’t think anyone’s really quite figured out the hallucination problem, and that is super scary and dangerous in a highly controlled and restricted financial services industry. Rather than generative AI, I think it might be some future large models that are perhaps focused on a different problem set that will be transformative. I don’t think we’ve seen the killer approach to AI in financial markets yet,” says Murphy.

“While solutions are being developed, so far it seems unlikely that such hallucinations will be eradicated entirely from these models,” said the CMA’s Hayter. “So it’s really important that developers have the right incentives from the market to develop models that are as accurate as technically possible.”

To combat these problems, regulators agreed that they must focus not on the output, but on the datasets underpinning the models.

Jessica Rusu, the chief data, information, and intelligence officer at the Financial Conduct Authority (FCA), told attendees at the summit that “data considerations are of paramount importance to the safe and responsible adoption of AI. And the role of data in that responsible AI must depend on the quality, the management, the governance, as well as data accountability and ownership structures of the data, and the protection of it.”

According to Ediphy’s Murphy, the banks and asset managers of the future are likely to train private AI models based on their own datasets, but built on top of larger models produced by big tech players like Google and OpenAI.

“Financial institutions will have their own proprietary data, and they will want to commingle that with some specific licensed data from data providers, plus the data that is essentially encompassed in some of these large models that are being trained on GPU clusters. Then they’re going to train their own models on top of those datasets. How do they orchestrate all that? How do they navigate that data management and computing process, ensuring data privacy and integrity?” Murphy says. “I don’t think many banks are well set up to do that just yet, but that's the task at hand.”

Industry weighs in

Part of the challenge for regulators is the difficulty of assessing risks in new technologies that they do not use themselves.

“The regulators typically do have some consideration about not wishing to introduce more uncertainty and disruption in the market than they need to. So I think when it comes to AI, there’s obviously still work ongoing there to understand whether some of the current dynamics could have a detrimental effect on end outcomes to investors and consumers, and we know regulators are beginning to ask the right questions here,” Murphy says.

A delegate from IBM at the City & Financial Global summit pointed out that while the UK government is vocal about encouraging innovation, press releases publicizing its upcoming AI summit focus on safety, with the term “frontier AI” mentioned repeatedly.

But regulators stressed that the need to crack down on dangerous implementations of AI goes hand-in-hand with attempts to encourage responsible new innovations.

The FCA recently launched a permanent digital sandbox initiative to help firms integrate technologies, including AI, into their workflows.

“We have a vast reserve of synthetic data and other data assets that we can use to support innovation in AI. We’ve recently used the digital sandbox to support some AI testing in the greenwashing tech sprint,” Rusu told the FCA’s annual town hall.

Indeed, the UK’s efforts to present itself as an AI-friendly jurisdiction have raised some eyebrows in the industry, with some suggesting that the move could backfire.

“With the UK being so close to the EU, but not necessarily having as strict rules as the EU AI Act, will it become a test ground for AI before it hits the EU?” said UBS’ Patel, adding that many financial institutions will likely deploy similar AI models across jurisdictions, so they may have to comply with the most stringent regulation in any case.

Connor Wright, partnerships manager at the Montreal AI Ethics Institute, says that far from being constrained in their use of AI, firms may be hampered by an embarrassment of riches.

“I would worry that the financial industry is going to have so much generative AI that every problem starts to look like a nail. But there might be other solutions that are a little bit better suited, whether it be in-person meetings, different use of AI, or more focus on document analysis, for example. There can be an opportunity cost in using generative AI,” Wright says.

The global AI market is estimated to grow by as much as 42% a year for the next 10 years, and regulators are determined to keep up with the frenetic growth. By then, the technology may be as different from our current applications as predictive pricing is from cat recognition.

The time may not be far off when, with the help of LLMs, a bank’s systems can interpret regulatory changes, apply new code in its applications, perform automated testing, and determine whether or not the firm is compliant. Who knows? It might even put regulators out of a job.


