The capital markets’ appetite for AI is evolving ... slowly

Nyela checks the vibe of generative AI, which is slowly evolving from frenetic conversations to tangible tools and use cases.

A common expression used among those in my generation (Gen Z, that is) is “vibe check.” According to Urban Dictionary, a vibe check is “a spontaneous and usually random time where someone checks your vibe. A vibe check should usually be a pleasant experience where the person being checked is vibin’.” In other words, it’s a cooler way of saying you checked in on a friend to make sure they were having a good time.

This is a vibe check for the moment AI is having in the capital markets—and it’s not my first one. Last July, I wrote a six-month state of the tech on generative AI and large language models, unpacking the various announcements and sentiments expressed in the six months since the world became obsessed with OpenAI’s ChatGPT and its underlying technology.

Back then, it was clear that it would be a slow burn for many, but that data providers like Moody’s, Bloomberg, and S&P, to name a few, were already digging in.

What wasn’t entirely clear at the time was how banks would approach the technology. Traditionally, one of the barriers to entry with any new technology is compliance headaches and the establishment of governance models. 

An informal survey by sibling publication Risk.net last summer found that at least six global systemically important banks (G-Sibs)—JP Morgan, Bank of America, Citi, Goldman Sachs, Wells Fargo and Deutsche Bank—had temporarily curbed the use of ChatGPT among employees. Morgan Stanley allowed limited personnel access but only for pre-approved use cases under rigorous governance and controls. But the appetite to explore the use cases was there.

And it still is. Banks, asset managers, and vendors are deep in their exploratory phases of generative AI while still looking to traditional AI to stick to the stuff it knows best. It’s breeding a best-of-both-worlds model that likely previews what utilization will look like across the industry in a few years.

Conversations among end-users

2026. That is the year, Bank of New York Mellon CEO Robin Vince told investors and analysts on the bank’s Q1 2024 earnings call, when the benefits of its investment in AI technologies and supportive infrastructure will start showing up on the expense line.

“We also see meaningful opportunity over the coming years from continued digitization and reengineering initiatives as well as from embracing new technologies,” Vince said. “To support this effort, we are making deliberate investments, enabling us to scale AI technologies across the organization through our enterprise AI hub.”

One of those investments is the deployment of the Nvidia DGX SuperPOD, AI data center infrastructure purpose-built for enterprises. Vince said BNY Mellon was the first bank to deploy the infrastructure, which will give it the processing capacity to innovate faster, reduce risk, and launch AI-powered capabilities.

Among AI capabilities already in use, Vince pointed to software that provides predictive trade analytics around settlement failures. Clients can monitor for potential fails and take action, with some of those actions linking directly to other BNY Mellon platforms.
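BNY Mellon has not described the mechanics behind those analytics, but the general shape of the problem is familiar: score each pending trade on its probability of failing to settle so that operations teams and clients can intervene early. The Python sketch below, built on synthetic data with hypothetical feature names, is meant only to illustrate that pattern, not the bank’s system.

```python
# Illustrative only: BNY Mellon has not described how its predictive trade
# analytics work. This sketches the general shape of a settlement-fail
# predictor (a classifier scoring each pending trade's probability of failing
# to settle) on synthetic data with hypothetical feature names.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 5_000

trades = pd.DataFrame({
    "notional_usd": rng.lognormal(mean=13, sigma=1, size=n),
    "days_to_settle": rng.integers(1, 4, size=n),
    "counterparty_fail_rate": rng.beta(2, 30, size=n),  # historical fail rate
    "is_cross_border": rng.integers(0, 2, size=n),
})
# Synthetic label: fails are rare and loosely tied to the features above.
p_fail = 0.02 + 0.5 * trades["counterparty_fail_rate"] + 0.01 * trades["is_cross_border"]
trades["failed"] = rng.uniform(size=n) < p_fail

X = trades.drop(columns="failed")
y = trades["failed"]

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)

# Score pending trades and flag the riskiest for the ops team to act on.
trades["fail_probability"] = model.predict_proba(X)[:, 1]
print(trades.nlargest(5, "fail_probability")[["notional_usd", "fail_probability"]])
```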

The bank is also embracing AI’s strength in coding, deploying GitHub Copilot to its developers. “I was walking around one of our buildings the other day and was talking to one of our developers who has been out of school for a year and change,” Vince said. “And already they think they are 25% more productive as a developer, and that’s in the very early days of using GitHub Copilot.”

Armando Benitez, managing director and head of AI and quantitative engineering at BMO Capital Markets, detailed to me last month that the bank’s exploration of GenAI has mainly applied to internal use cases so far, which has been the norm for highly regulated banks. “We’ll be focusing on building what we call horizontal capabilities, and those horizontal capabilities are going to help us later build more complex solutions,” Benitez said.

In a low-risk approach, the bank is working to identify how it can improve the code-writing process. A developer can show the LLM a piece of code, such as an SQL query written in their particular style, and ask for a hint about what the code is doing. That hint can take a user from zero to 80, while getting to the final 100 still keeps a human in the loop. So far, BMO has had success with the programming languages Python, C#, and JavaScript.
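BMO has not said which models or tooling sit behind this workflow, so the following is a minimal Python sketch of that “hint” pattern. The llm_client callable and stub_llm function are hypothetical stand-ins for whatever internal model endpoint a firm would actually use; the point is that the snippet goes out wrapped in a constrained prompt, and a suggestion comes back for the developer, not the model, to act on.

```python
# Illustrative only: this is not BMO's tooling. It sketches the "code hint"
# pattern Benitez describes -- show an LLM a snippet (here, a SQL query) and
# ask it to explain or suggest improvements, with a human making the final call.

from typing import Callable

def ask_for_code_hint(snippet: str, llm_client: Callable[[str], str]) -> str:
    """Wrap the snippet in a constrained prompt and return the model's hint."""
    prompt = (
        "You are a code reviewer. Explain what the following SQL does and "
        "suggest one improvement. Do not rewrite it wholesale.\n\n"
        + snippet
    )
    return llm_client(prompt)

def stub_llm(prompt: str) -> str:
    # Stand-in so the sketch runs without any external service.
    return ("This query aggregates trade notional per desk; "
            "consider filtering on an indexed trade_date column.")

if __name__ == "__main__":
    query = """
    SELECT desk, SUM(notional)
    FROM trades
    WHERE trade_date >= '2024-01-01'
    GROUP BY desk;
    """
    hint = ask_for_code_hint(query, stub_llm)
    print(hint)  # The developer, not the model, decides what to change.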

In Benitez’s summation, traditional problems will continue to be solved by traditional models and LLMs will continue to get better. “They’re fantastic synthesizers, [but] I don’t think they’re there yet with reasoning capabilities that would allow us to use it for the quantitative space,” he says. 

Still, he did not rule out a reality where the two could work in tandem. A well-established quantitative model could be analyzing market data and making predictions, for example, while an additional input to that model comes from a quantitative measure an LLM has produced.

“So it is not necessarily that the LLM is going to replace the other, but it is more in conjunction with the traditional machine learning, using it as an input just to have another feature,” he says. “But I think the core part of the decision will be made by the traditional model and then the LLM is an added bonus.”
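Neither Benitez nor BMO has shared an implementation, so the sketch below shows only the general division of labor he describes, using scikit-learn on synthetic data: a traditional model makes the core prediction from market-data features, and an LLM-derived measure, here a made-up sentiment score, is appended as just one more input column.

```python
# Illustrative only: this is not BMO's model. A traditional model makes the
# core decision from market-data features, with an LLM-derived measure
# (e.g. a news-sentiment score scaled to [-1, 1]) appended as one more input.
# All data here is synthetic.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2_000

# Traditional quantitative features: e.g. returns, volatility, order-flow imbalance.
market_features = rng.normal(size=(n, 3))

# Hypothetical LLM-derived feature: a sentiment score produced upstream.
llm_sentiment = rng.uniform(-1.0, 1.0, size=(n, 1))

# Synthetic target: mostly driven by market features, nudged by sentiment.
signal = 1.2 * market_features[:, 0] - 0.8 * market_features[:, 2] + 0.4 * llm_sentiment[:, 0]
y = (signal + rng.normal(scale=0.5, size=n) > 0).astype(int)

# The LLM output is appended as an extra column, not used as a replacement model.
X = np.hstack([market_features, llm_sentiment])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("accuracy:", round(model.score(X_test, y_test), 3))
print("feature importances:", np.round(model.feature_importances_, 3))
# The last importance shows how much weight the traditional model ends up
# giving the LLM-derived input relative to the classic features.
```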

And while there is value in applying this technology internally, others say that just because something is flashy and new doesn’t necessarily mean it fits into the tech landscape and will reap rewards. Rachel Zhang, managing director and head of fixed income front office technology at Jefferies, told audience members at the North American Financial Information Summit in New York last week that there were important questions tech teams should be asking themselves.

“So a lot of people say, ‘Oh, I have to have an AI project.’ Okay, what is the business problem you are solving? What costs are you pumping into your project and how much more can you actually make?” she said. “I think a lot of money is wasted because the data is not ready, or the business case is actually not there.”

This leads me to once again draw a comparison between this hype cycle and the 2015–2018 period, when blockchain was on everyone’s lips and money was flying into consortiums, innovation labs, and startups.

Many people, including my clairvoyant editor Anthony Malakian, will tell you that blockchain was a hammer looking for a nail. The best example might be the ASX’s eight-year attempt to make blockchain work as a replacement for its CHESS system, which ended with the exchange derecognizing the software at a cost of A$250 million (US$168.3 million).

The stakes are quite different here and I think BMO’s work illustrates this best. It’s not about throwing out what you already have, but about finding avenues where the new stuff can make it better. 

Generative AI and large language models are going to augment what classic AI, like machine learning and NLP, is already powering. I have been writing for WatersTechnology for three years now, and while I’ve learned several valuable things in that time, one standout lesson has been that technology works better when things are brought together. The vibes on this one are good.
