Former Standard Chartered CDO details AI in capital markets and regulators’ approach

Waters Wavelength Podcast Interview Series: Shameek Kundu, now head of financial services and chief strategy officer at AI startup Truera, talks about AI, the state of ML infrastructure, and how regulators are keeping a keen eye on the space.

Podcast Timestamps

5:00 Shameek joins the podcast and talks about the speed and focus that moving to a startup has given him. 

10:00 While there is broad adoption of AI within capital markets firms, the depth may not be there. 

11:30 The question cannot always be, ‘Why not use AI?’ It could also be, ‘Why do I use AI?’

14:30 Shameek says infrastructure for building and deploying ML models is still more art than science.

15:30 The barrier to AI in capital markets is the lack of trustworthiness and reliability of those models over time. 

21:30 Not many firms have a mature data and infrastructure blueprint for AI innovation.

23:00 Shameek says that in some ways, the data translator role is a stop-gap measure. 

26:00 How are regulators looking at the use of AI?

36:00 Shameek is excited about the use of data in addressing the current and next generation’s problems. 

Shameek Kundu, former chief data officer at Standard Chartered, and now head of financial services and chief strategy officer at Truera, a startup dedicated to building trust in AI, joined the Waters Wavelength Podcast to talk about AI explainability and how regulators approach the use of emerging technologies. 

One of the topics discussed was how regulators are approaching the use of AI and ML and how they could potentially introduce more prescriptive regulations around the use of these technologies within the capital markets. (26:00)

In April, US prudential regulators, led by the Federal Reserve, issued a request for information (RFI) on the uses of AI and machine learning. This move has led some to worry that new regulations could stifle innovation.

While Kundu believes that regulators' approach to AI and ML has, overall, been thoughtful and nuanced so far, he warned that decisions ruling out certain non-inherently explainable models could stifle innovation.

In response to the RFI from US prudential regulators, Kundu said there is a debate over whether there is a place only for inherently explainable models, or also for non-inherently explainable models, which must be explained post hoc.

“My personal view on that would be there’s a place for both kinds of models. If you just limit it to the former, we will potentially inhibit innovation,” he said.

Examples of inherently explainable models include generalized linear models, generalized additive models, and decision trees, all of which are interpretable by construction.

In comparison, non-inherently explainable models, or post hoc models, can only be explained after predictions have been made or the model has been trained. Examples include gradient-boosted models and several types of neural networks.
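The distinction can be illustrated with a toy sketch in plain Python. Everything here is made up for illustration, not any firm's method: the linear scorer stands in for an inherently explainable model (its coefficients are the explanation), the `black_box_score` function stands in for a GBM or neural network, and the perturbation-based importance is a crude post hoc explanation.

```python
# An inherently explainable model: a linear scorer whose
# coefficients directly state each feature's contribution.
COEFS = {"income": 0.5, "debt_ratio": -1.2, "years_employed": 0.3}

def linear_score(x):
    return sum(COEFS[f] * x[f] for f in COEFS)

# A stand-in "black box": imagine a gradient-boosted model or
# neural network whose internals cannot be read off directly.
def black_box_score(x):
    return 0.4 * x["income"] - x["debt_ratio"] ** 2 + 0.2 * x["years_employed"]

def post_hoc_importance(model, x, eps=1e-4):
    """Crude post hoc explanation: estimate each feature's local
    effect by perturbing it slightly and measuring the change
    in the model's output (a finite-difference sensitivity)."""
    base = model(x)
    effects = {}
    for f in x:
        bumped = dict(x)
        bumped[f] += eps
        effects[f] = (model(bumped) - base) / eps
    return effects

applicant = {"income": 2.0, "debt_ratio": 0.5, "years_employed": 4.0}

# For the linear model, the explanation is just its coefficients;
# for the black box, an explanation must be reconstructed afterwards.
print(post_hoc_importance(black_box_score, applicant))
```

Practitioners typically use richer post hoc techniques (SHAP values, permutation importance), but the shape of the problem is the same: the explanation is a separate artifact produced after the fact, rather than a property of the model itself.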

Many image, text, and voice-related processing models fall in that category, he said. “There will probably be some categories where there isn’t an equivalent inherently explainable model that is anywhere close to the same level of performance today. That doesn’t mean it can’t change over time. But right now, there isn’t,” he said.

He explained that a workaround he has seen some banks and asset managers use is to run so-called "black box" models as a pre-processing step to extract features, and then feed those features into more inherently explainable models.

“In an inherently explainable model, you will not be allowed to say, ‘I don’t understand what happened in there,’ which means you need to know what, very simplistically, went into the funnel. And what you’re doing, in this case, is, you are deciding what to put into the funnel based on the output from a GBM, let’s say,” Kundu said.

“First, let’s try and justify what the GBM model said. Once we are convinced, now we can put it into our inherently explainable model as one of the factors for the decision making. So it takes away that regulatory or compliance risk because while a machine might have told you this might be a good feature, you’re actually assessing that yourself before you put it into the funnel.”
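The workflow Kundu describes might be sketched as follows, in plain Python. The black-box scorer, the feature names, the approval list, and the weights are all illustrative stand-ins, not a production pipeline: a candidate feature is derived from the black-box model, vetted by a human, and only then admitted into a simple linear model that makes the actual decision.

```python
# Stand-in for a GBM or other black-box model: it surfaces a
# candidate feature (e.g. an interaction it found predictive).
def gbm_candidate_feature(x):
    return x["income"] * (1.0 - x["debt_ratio"])

# Vetting step: a feature only enters the funnel once someone
# has justified it on its own terms and signed it off.
APPROVED_FEATURES = {"income", "debt_ratio", "gbm_income_x_headroom"}

def build_inputs(x):
    feats = {
        "income": x["income"],
        "debt_ratio": x["debt_ratio"],
        "gbm_income_x_headroom": gbm_candidate_feature(x),
    }
    # Refuse anything that has not been explicitly approved.
    return {k: v for k, v in feats.items() if k in APPROVED_FEATURES}

# The decision itself stays in an inherently explainable model:
# a linear scorer whose weights can be read off directly.
WEIGHTS = {"income": 0.5, "debt_ratio": -1.0, "gbm_income_x_headroom": 0.8}

def explainable_score(x):
    inputs = build_inputs(x)
    return sum(WEIGHTS[k] * inputs[k] for k in inputs)

applicant = {"income": 2.0, "debt_ratio": 0.25}
print(explainable_score(applicant))
```

The design point is the one Kundu makes: because a human assessed the candidate feature before it entered the funnel, every input to the final model can be accounted for, even though a machine originally suggested it.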

But again, he stressed that regulators aren’t out to stifle innovation.

“I genuinely think every regulator that I’ve spoken to—and probably across the world, there’s at least eight or nine major jurisdictions that I’ve spoken to on this topic—is approaching this in an extremely thoughtful and nuanced manner,” he said.

Taking the Monetary Authority of Singapore as an example, it has been three years since the regulator released a set of principles to promote fairness, ethics, accountability, and transparency (Feat) in the use of AI and data analytics in Singapore’s financial sector.

While there is certainly regulatory guidance, as spelled out in the Feat principles, Kundu said there is not yet a single prescriptive rule dedicated to the use of AI or machine learning.

Some jurisdictions may start coming up with more prescriptive rules, though. Even so, Kundu said the regulators’ approach has been “characterized by realism,” which is that this is an area that nobody has grasped fully, and it’s a space that’s rapidly evolving.

“I do think after two, three years of thinking about it, perhaps some of them will become more prescriptive in their guidance. But from every account I’ve had so far, it should not be something that stifles innovation too much. Of course, it will increase a level of governance and discipline as time goes by, but that’s to be desired,” he said.



