Firms worry that lack of ‘explainability’ will be regulatory roadblock for AI

Industry experts share their concerns about advanced AI’s ‘black-box’ nature and how that may attract fragmented regulatory scrutiny.

Those responsible for leading AI initiatives at financial firms fear that the opaque nature of generative AI and the large language models (LLMs) that underpin it may significantly delay its uptake for certain tasks as regulators seek greater clarity into its use and operation.

At last week’s North American Financial Information Summit, hosted by WatersTechnology, speakers on different panels repeatedly cited the challenge of explainability—i.e., the ability to explain how and why a model arrives at a given output.
