AI Explainability 'an Afterthought' at Banks, But Financial Theory Infusion Could Help 

Eric Tham of the National University of Singapore said during the Innovation Exchange that explainability is an afterthought at banks when they develop their AI-driven models. Unsurprisingly, some bankers did not agree.

As financial firms increasingly turn to artificial intelligence for help with decision-making, and as these tools grow more sophisticated, ensuring that users understand how the AI derives its outputs is becoming more difficult. This idea of explainability is also becoming top of mind for regulators.

Eric Tham, a senior lecturer at the National University of Singapore, contends, however, that it’s not explainability that is the greatest challenge for financial organizations using

