Europe’s AI Act is taking shape. How will the UK respond?
As the EU pushes through a historic AI Act, its neighbor is left wondering how to keep up.
In 2012, artificial intelligence was unable to compete with even the doziest of dogs. That year, Google researchers made a breakthrough by training a neural network, running on a cluster of 16,000 processors, on unlabeled stills from YouTube videos. In a matter of days, the network taught itself to recognize pictures of cats.
The increase in computing power and the expansion of foundational datasets since then have changed AI from an abstract futuristic concept to a commodity. We are surrounded by it—from word processors predicting what we will type next to YouTube videos automatically generating subtitles. And the transformational potential for the capital markets is staggering.
But the meteoric rise of AI and its rapid adoption by financial institutions have left regulators wondering how best to oversee it. At a summit organized by City & Financial Global, spokespeople from various UK regulators grappled with the twin imperatives of stimulating innovation in a rapidly advancing technology and ensuring its safe use.
“These are markets where we can’t just wait for things to go wrong before we step in. But neither do you want a heavy-handed intervention to stifle innovation,” said Will Hayter, senior director of the digital markets unit at the Competition and Markets Authority (CMA).
The question of regulating AI is receiving particular attention in the UK at the moment, largely prompted by discussions on the continent. The European Parliament adopted its negotiating position on the AI Act in June this year, setting out standards for AI systems and banning certain applications outright.
The UK, which has stated its intention to become a “science and technology superpower by 2030,” is now under pressure to respond. In the summer of last year, the government published a policy paper proposing a light-touch approach centered on a set of cross-sectoral principles—such as transparency, fairness, and security—that AI developers and vendors would be asked to adhere to.
“I think that what the UK has done is quite sensible,” said Reena Patel, the head of data privacy for Emea at UBS, who also spoke at the summit. “The people who are developing or implementing the AI, particularly in their own sectors, are going to know the risks. So in terms of putting principles in place and saying, ‘You’ve got to assess that, the onus is on you,’ it makes companies more responsible for what they’re doing.”
However, the policy paper is only a first step, and the UK must now choose whether to continue down a low-touch, principles-based path or cleave closer to the EU.
One regulator to rule them all?
One area where the UK’s policy paper diverges strongly from the EU’s proposed AI Act is in its highly decentralized model. The existing regulators would be tasked with interpreting and implementing the list of principles as they see fit.
The EU, on the other hand, proposes a comprehensive set of rules. All AI systems would be categorized into one of four risk levels. Systems posing an unacceptable risk would be banned outright, while high-risk systems—the next tier down—would be subject to a conformity assessment and audit before being granted the CE mark, which indicates a product has passed certain tests and allows it to enter the market.
Another speaker at the City & Financial Global summit, Professor the Lord Darzi of Denham, called for the UK to follow the EU’s lead by setting up a new regulator dedicated to AI. “We know that there is a shortage of regulators well-versed in AI. They are like gold dust. So concentrating knowledge in a single regulator, in my view, would be more practical and more efficient. A single regulator would also be better able to respond swiftly to this fast-changing environment,” he said.
This suggestion was not met with enthusiasm by regulators, however. Stephen Almond, executive director for regulatory risk at the Information Commissioner’s Office, recognized that there are gaps in the existing regulation of AI, but said, “I think we risk oversimplifying the debate around how AI is going to have to be regulated by simply saying, ‘Let’s leave it to a single regulator.’ Because actually, I think what we need to do is understand where those gaps are.”
Key concerns
While they wait for further steps from the government, regulators are focused on the biggest threats posed by existing AI applications. Foremost among these is the thorny issue of explainability—working out how AI produced the results it did.
“The problem with these models is that you can’t really intuitively understand them. So we’re a long way away from feeling as though we can really trust them,” says Chris Murphy, CEO of trading analytics and execution services provider Ediphy.
So-called hallucination also poses a problem for firms deploying AI. This is particularly common in large language models (LLMs), which sometimes confidently give incorrect answers to questions posed by users.
“I don’t think anyone’s really quite figured out the hallucination problem, and that is super scary and dangerous in a highly controlled and restricted financial services industry. Rather than generative AI, I think it might be some future large models that are perhaps focused on a different problem set that will be transformative. I don’t think we’ve seen the killer approach to AI in financial markets yet,” says Murphy.
“While solutions are being developed, so far it seems unlikely that such hallucinations will be eradicated entirely from these models,” said the CMA’s Hayter. “So it’s really important that developers have the right incentives from the market to develop models that are as accurate as technically possible.”
To combat these problems, regulators agreed that they must focus not on models’ outputs, but on the datasets underpinning them.
Jessica Rusu, the chief data, information, and intelligence officer at the Financial Conduct Authority (FCA), told attendees at the summit that “data considerations are of paramount importance to the safe and responsible adoption of AI. And the role of data in that responsible AI must depend on the quality, the management, the governance, as well as data accountability and ownership structures of the data, and the protection of it.”
According to Ediphy’s Murphy, the banks and asset managers of the future are likely to train private AI models based on their own datasets, but built on top of larger models produced by big tech players like Google and OpenAI.
“Financial institutions will have their own proprietary data, and they will want to commingle that with some specific licensed data from data providers, plus the data that is essentially encompassed in some of these large models that are being trained on GPU clusters. Then they’re going to train their own models on top of those datasets. How do they orchestrate all that? How do they navigate that data management and computing process, ensuring data privacy and integrity?” Murphy says. “I don’t think many banks are well set up to do that just yet, but that’s the task at hand.”
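The pattern Murphy describes—proprietary data layered over an externally trained base model—already has a familiar engineering shape. Below is a minimal sketch of that final step, fine-tuning a small open-weight base model on a local corpus using the Hugging Face transformers and datasets libraries. It is an illustration only, not a pipeline any firm quoted here has endorsed; the base model name, file path, and hyperparameters are all assumptions.

```python
# Minimal sketch: fine-tune an open-weight base model on a firm's own corpus.
# "gpt2" stands in for a licensed foundation model; "research_notes.txt" is a
# hypothetical local file of proprietary text that never leaves the firm.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

base = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# Load and tokenize the in-house corpus.
data = load_dataset("text", data_files={"train": "research_notes.txt"})
tokenized = data["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="private_model",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False gives standard next-token (causal) language-modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("private_model")  # weights stay inside the firm's perimeter
```

The orchestration Murphy alludes to—entitlements on licensed data, privacy controls, GPU scheduling—sits around this core loop, and is the part he suggests most banks are not yet set up for.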
Industry weighs in
Part of the challenge for regulators is the difficulty of assessing risks in new technologies that they do not use themselves.
“The regulators typically do have some consideration about not wishing to introduce more uncertainty and disruption in the market than they need to. So I think when it comes to AI, there’s obviously still work ongoing there to understand whether some of the current dynamics could have a detrimental effect on end outcomes to investors and consumers, and we know regulators are beginning to ask the right questions here,” Murphy says.
A delegate from IBM at the City & Financial Global summit pointed out that while the UK government is vocal about encouraging innovation, press releases publicizing its upcoming AI summit focus on safety, with the term “frontier AI” mentioned repeatedly.
But regulators stressed that the need to crack down on dangerous implementations of AI goes hand-in-hand with attempts to encourage responsible new innovations.
The FCA recently launched a permanent digital sandbox initiative to help firms integrate technologies, including AI, into their workflows.
“We have a vast reserve of synthetic data and other data assets that we can use to support innovation in AI. We’ve recently used the digital sandbox to support some AI testing in the greenwashing tech sprint,” Rusu told the FCA’s annual town hall.
Indeed, the UK’s efforts to present itself as an AI-friendly jurisdiction have raised some eyebrows in the industry, with some suggesting that the move could backfire.
“With the UK being so close to the EU, but not necessarily having as strict rules as the EU AI Act, will it become a test ground for AI before it hits the EU?” said UBS’ Patel, adding that many financial institutions will likely deploy similar AI models across jurisdictions, so they may have to comply with the most stringent regulation in any case.
Connor Wright, partnerships manager at the Montreal AI Ethics Institute, says that far from being constrained in their use of AI, firms may be hampered by an embarrassment of riches.
“I would worry that the financial industry is going to have so much generative AI that every problem starts to look like a nail. But there might be other solutions that are a little bit better suited, whether it be in-person meetings, different use of AI, or more focus on document analysis, for example. There can be an opportunity cost in using generative AI,” Wright says.
The global AI market is estimated to grow by as much as 42% a year for the next 10 years, and regulators are determined to keep up with the frenetic growth. By then, the technology may be as different from our current applications as predictive pricing is from cat recognition.
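(For scale, a back-of-the-envelope check of what that estimate implies: if the market really compounded at 42% a year, it would be roughly 33 times its current size after a decade, since 1.42^10 ≈ 33.)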
The time may not be far off when, with the help of LLMs, a bank’s systems can interpret regulatory changes, apply new code in its applications, perform automated testing, and determine whether or not the firm is compliant. Who knows? It might even put regulators out of a job.