
Model risk in the age of generative AI

Banks are racing to understand the risks posed by a new breed of multi-purpose bots.

Credit: NB Illustration/Eoin Coveney

The dizzying rise of artificial intelligence tools such as Anthropic’s Claude and OpenAI’s ChatGPT has risk managers wrestling with an existential question: when is a model not a model?

Banks are deploying generative AI for a growing number of tasks, from the mundane to the miraculous: writing emails, assessing credit risk, predicting trade fails.

Executives are now having to decide whether applications built on top of large language models, or LLMs, should be treated as models for risk management purposes.
