Need to know
Podcast Timestamps
5:00 Shameek joins the podcast and talks about the speed and focus that moving to a startup has given him.
10:00 While there is broad adoption of AI within capital markets firms, the depth may not be there.
11:30 The question shouldn’t always be, ‘Why not use AI?’ Sometimes it should be, ‘Why use AI?’
14:30 Shameek says infrastructure for building and deploying ML models is still more art than science.
15:30 The barrier to AI in capital markets is the lack of trustworthiness and reliability of models over time.
21:30 Not many firms have a mature data and infrastructure blueprint for AI innovation.
23:00 Shameek says that in some ways, the data translator role is a stop-gap measure.
26:00 How are regulators looking at the use of AI?
36:00 Shameek is excited about the use of data in addressing the current and next generation’s problems.
Shameek Kundu, former chief data officer at Standard Chartered, and now head of financial services and chief strategy officer at Truera, a startup dedicated to building trust in AI, joined the Waters Wavelength Podcast to talk about AI explainability and how regulators approach the use of emerging technologies.
One of the topics discussed was how regulators are approaching the use of AI and ML and how they could potentially introduce more prescriptive regulations around the use of these technologies within the capital markets. (26:00)
In April, US prudential regulators, led by the Federal Reserve, issued a request for information (RFI) on the uses of AI and machine learning. This move has led some to worry that new regulations could stifle innovation.
While Kundu believes that regulators’ approach to AI and ML has so far been thoughtful and nuanced, he warned that decisions ruling out certain non-inherently explainable models could stifle innovation.
In response to the RFI by US prudential regulators, Kundu said there is a debate over whether only inherently explainable models have a place, or whether there is also room for non-inherently explainable models, otherwise known as post hoc models.
“My personal view on that would be there’s a place for both kinds of models. If you just limit it to the former, we will potentially inhibit innovation,” he said.
Examples of inherently explainable models include generalized linear models, generalized additive models, and decision trees, whose structure can be read and interpreted directly.
By comparison, non-inherently explainable models, or post hoc models, require explanation after predictions have been made or the model has been trained. Examples include gradient boosted models and several types of neural networks.
Many image, text, and voice-related processing models fall in that category, he said. “There will probably be some categories where there isn’t an equivalent inherently explainable model that is anywhere close to the same level of performance today. That doesn’t mean it can’t change over time. But right now, there isn’t,” he said.
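One common family of post hoc techniques is model-agnostic: it probes a trained model from the outside rather than reading its internals. The sketch below illustrates the idea with permutation importance on a toy stand-in for a black-box model; the function names and data are invented for the example, and real work would use a library such as SHAP or scikit-learn rather than hand-rolled code.

```python
import random

# Toy stand-in for a non-inherently explainable model: we treat it as a
# black box and only observe its predictions. (Illustrative, not a real GBM.)
def black_box(x):
    return 3.0 * x[0] + 0.5 * x[1]  # secretly ignores x[2]

def permutation_importance(model, X, n_features, seed=0):
    """Post hoc, model-agnostic explanation: shuffle one feature at a
    time and measure the mean absolute change in the predictions."""
    rng = random.Random(seed)
    base = [model(row) for row in X]
    importances = []
    for j in range(n_features):
        col = [row[j] for row in X]
        rng.shuffle(col)
        perturbed = [model(row[:j] + [col[i]] + row[j + 1:])
                     for i, row in enumerate(X)]
        importances.append(
            sum(abs(a - b) for a, b in zip(base, perturbed)) / len(X))
    return importances

rng = random.Random(42)
X = [[rng.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(200)]
imp = permutation_importance(black_box, X, 3)
# imp ranks feature 0 above feature 1, and feature 2, which the
# model never uses, scores exactly 0.0
```

The point of the sketch is that nothing about the explanation depends on seeing inside the model, which is why such techniques apply to gradient boosted models and neural networks alike.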
He explained that a workaround he has seen some banks and asset managers use is to run so-called ‘black box’ models as a pre-processing step to extract features, and then incorporate those features into more inherently explainable models.
“In an inherently explainable model, you will not be allowed to say, ‘I don’t understand what happened in there,’ which means you need to know what, very simplistically, went into the funnel. And what you’re doing, in this case, is, you are deciding what to put into the funnel based on the output from a GBM, let’s say,” Kundu said.
“First, let’s try and justify what the GBM model said. Once we are convinced, now we can put it into our inherently explainable model as one of the factors for the decision making. So it takes away that regulatory or compliance risk because while a machine might have told you this might be a good feature, you’re actually assessing that yourself before you put it into the funnel.”
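That human-in-the-loop funnel can be sketched in a few lines. Everything here is illustrative: the feature names, the GBM-style importance scores, and the 0.2 approval threshold are invented, and a one-variable ordinary least squares fit stands in for the ‘inherently explainable model’.

```python
# Hypothetical importance scores reported by a black-box model such as a
# GBM; the names, numbers and 0.2 cut-off are invented for illustration.
gbm_scores = {"income": 0.62, "age": 0.30, "postcode_hash": 0.08}

# Step 1: a human reviews the machine-suggested features and approves
# only those they can justify before anything enters the funnel.
approved = [f for f, s in sorted(gbm_scores.items(), key=lambda kv: -kv[1])
            if s >= 0.2]

# Step 2: fit an inherently explainable model (here a one-variable
# ordinary least squares line) on an approved feature only, so every
# coefficient can be inspected and defended.
def ols_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

incomes = [30, 45, 60, 80, 100]           # toy values of an approved feature
default_rate = [0.9, 0.7, 0.5, 0.3, 0.1]  # toy target
slope, intercept = ols_fit(incomes, default_rate)
```

The regulatory point sits in step 1: the black box only proposes candidate features, and a person decides what actually feeds the explainable model.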
But again, he stressed that regulators aren’t out to stifle innovation.
“I genuinely think every regulator that I’ve spoken to—and probably across the world, there’s at least eight or nine major jurisdictions that I’ve spoken to on this topic—is approaching this in an extremely thoughtful and nuanced manner,” he said.
Taking the Monetary Authority of Singapore as an example, it has been three years since the regulator released a set of principles to promote fairness, ethics, accountability, and transparency (Feat) in the use of AI and data analytics in Singapore’s financial sector.
While there’s certainly regulatory guidance, as spelled out in the Feat principles, Kundu said there is not yet a single prescriptive rule dedicated to the use of AI or machine learning.
Some jurisdictions may start coming up with more prescriptive rules, though. Even so, Kundu said the regulators’ approach has been “characterized by realism,” which is that this is an area that nobody has grasped fully, and it’s a space that’s rapidly evolving.
“I do think after two, three years of thinking about it, perhaps some of them will perhaps become more prescriptive in their guidance. But from every account I’ve had so far, it should not be something that stifles innovation too much. Of course, it will increase a level of governance and discipline as time goes by, but that’s to be desired,” he said.