Gold standard: Are golden copies losing their luster?

The concept of a “golden copy” is well established. But what happens when buy-side firms want to differentiate themselves by launching new services, only to find themselves maintaining multiple “single” sources of data—or worse, none at all?

The capital markets are awash with data. Positively drowning in it. Market data, reference data, alternative data, risk data, compliance data. Most of it—the real-time market data and static reference data components—is well managed using established market data platforms and enterprise data management platforms that serve as a single, accurate record of securities metadata, which other systems reference: the “golden copy.”

This ensures that different parts of an organization are using consistent data, no matter what their function. It reduces the risks and costs associated with things like failed trades, inaccurate risk management or incorrect valuations resulting from mismatches between data values or security identifiers. But there are some tasks that—by accident or necessity—draw on data that sits outside of these golden copies, which increases costs, reduces efficiency, and potentially creates incorrect reporting. At best, incorrect data is embarrassing. At worst, depending on what the data is used for, it can result in regulatory fines.

David Sellors is head of performance, client and fund reporting at Jupiter Asset Management. “I’ve worked for various companies over 40 years, and I have not yet come across one that has data completely sorted out,” he says.

The problem is partly caused by mergers between financial firms and the challenges of integrating data assets, as well as the complexities of switching suppliers and technology infrastructures, and the different information needs of different business areas, which draw on data at different frequencies, he says.

“The front office needs live data, but funds may be priced at a different price point. Yet for reporting and presentations, they are often drawing on the same data,” Sellors says. “There may also be timing issues between up-to-date reporting and month-end reporting.”

As an example, he says, a fund manager may change mid-month. As a result, the firm may want to present new information online. But if it is reporting at month-end, it doesn’t want to show that change, because it is still reporting as at the previous month-end.

The result, he says, is added cost and complexity that makes it harder for firms to manage their data, and even harder to fix things later, once those datasets have grown bigger and more complicated.

“You want to avoid a cottage industry growing up around each different use case—that just increases your headcount and complexity. Even though you have to meet short-term client demands, you have to do something about it for the long term, or you end up with a huge project down the line where you have to unravel everything that you’ve built up,” he says. “It’s a constant tactical-versus-strategic prioritization—and that’s hard to do.”

More data, more problems

Multi-channel managers are especially struggling to make sense of different data sources, says Patrick Murray, founder and CEO of operations technology platform provider STP Investment Services.

“When you have multiple vendors, and multiple integration points, the likelihood of data problems increases,” he says.

When firms don’t properly manage those data sources, it makes it harder for users to interpret the data. “It’s amazing that even today, when events like the BP oil spill or the Ukraine-Russia war occur, many firms still don’t have the capability to understand in a timely manner things like issuer or country exposure across verticals due to unreliable or disjointed data programs.”

Of course, this shouldn’t be the case.

In the early 2000s, the industry undertook a big push to clean up its reference data, motivated largely by trade failures caused by incorrect reference data about the securities being traded. However, errors don’t just result in costly failed trades—there’s also regulatory and reputational risk to consider.

For example, says Jamie Keen, founder and CEO of UK-based low-code automation platform vendor FundSense, if a fund misses a dividend because it hasn’t properly managed corporate actions data, it could end up reporting incorrect performance numbers. Or perhaps the name of the fund manager hasn’t been updated. That’s a reputational risk issue: you won’t get fined, but you’ll look stupid in front of clients.

But, warns Abbey Shasore, CEO of UK-based reporting and automation tools provider Factbook, incorrectly listing the name and contact details of the distributor could result in fines. “It’s as important as misrepresenting the fund’s risk or the investor’s tax status,” he says.

So, why does this problem still exist? Those attempting to solve the problem blame, first and foremost, industry consolidation. When firms merge and integrate, they also need to integrate their data into a new single source, or risk discrepancies and duplication, says Keen.

“Now, all these firms have chief data officers, and data is a big part of their manifestos. But asset managers have always been very siloed, and often even have siloes within siloes,” he says. “So they may have multiple ‘golden sources’ for different purposes that arrive at different times and have to be fired off to clients at different times.” The cause can be partly technology-related, but also sometimes political, as business lines clash over who should pay for a large data management project.

This can especially be true following mergers, where execs from previously separate organizations are jockeying for positions within the newly merged entity. But on the other hand, a merger can light the way forward. In Jupiter Asset Management’s case, its 2020 acquisition of Merian Global Investors (previously Old Mutual Global Investors) forced the firm to take a critical look at its data infrastructure.

“In a way,” Sellors says, “the Merian takeover was a trigger for Jupiter because there were two firms and two sets of data, processes, and suppliers that needed to be merged together. Our approach was, ‘Neither is ideal, so why don’t we take the time to get it right.’ So we invested in putting appropriate thinking and resources into it, so it’s fit for purpose for the long term.”

He also warns that projects of this scale take time and commitment. Jupiter started the project about a year ago and probably has another nine months to go until it sees meaningful results, such as a single, fully-integrated security master.

However, once complete, the benefits are clear: It will future-proof the firm against new regulatory data requirements—or at least make them easier to fulfill—and allow it to expand its range of services without huge increases in headcount.

“It will set us up to handle more changes,” he says. “You can ensure efficiency, timeliness and accuracy, so you can get reports and data to clients quicker, and you can improve scalability so you can grow the fund size or number of funds, or create more custom reports with the same number of people.”

Indeed, though still underway, the project is already yielding results. One win, small though it may be, is a stock-level naming convention that recognizes when different administrators are referring to the same security and normalizes the name, whether each administrator’s own convention is to format it in capital letters or lowercase. The firm is also able to roll out new investment reports with the assistance of tech partner Factbook.
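
As a rough illustration of the idea, and not Jupiter’s actual implementation, a normalization step of this kind can be as simple as canonicalizing case and whitespace before matching records from different administrators. The function and sample names below are hypothetical.

    import re

    def normalize_security_name(raw_name: str) -> str:
        """Canonicalize a security name so that differently formatted
        administrator records map to the same key."""
        collapsed = re.sub(r"\s+", " ", raw_name.strip())  # collapse repeated whitespace
        return collapsed.casefold()                        # case-insensitive canonical form

    def same_security(name_a: str, name_b: str) -> bool:
        # Two administrator records are treated as the same security
        # if their normalized names match.
        return normalize_security_name(name_a) == normalize_security_name(name_b)

    print(same_security("ACME HOLDINGS PLC", "Acme  Holdings plc"))  # True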

In addition, Jupiter is rolling out about 70 new quarterly investment reports to cover all of its funds—something that Merian already offered, but Jupiter only provided commentary for, rather than full quarterly reports. The new reports provide quarterly information on returns, market context, performance, and outlook. The first 30 went live in late June and early July, and Jupiter will roll out significantly more at the end of September.

Automation is key to making the process scalable, Sellors says. Jupiter sends data to Factbook via secure FTP. Once Factbook has a base layer of data, it generates a draft report and sends it to the fund manager to add commentary, which is then edited, sent back to Sellors’ team to perform quality assurance, and turned into a PDF—which also generates translated versions in different languages—to be distributed to clients.
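
The handoff step in a pipeline like this is typically a scripted upload. The sketch below is a generic example using the paramiko library, with hypothetical host, credential, and file names rather than Jupiter’s or Factbook’s actual setup.

    import paramiko

    def upload_report_data(local_path: str, remote_path: str) -> None:
        """Upload a prepared data extract to a vendor's SFTP endpoint.
        Hostname, username, and key path are placeholders."""
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect("sftp.example-vendor.com", username="datafeed",
                       key_filename="/secrets/datafeed_ed25519")
        try:
            sftp = client.open_sftp()
            sftp.put(local_path, remote_path)  # push the extract to the vendor
            sftp.close()
        finally:
            client.close()

    upload_report_data("extracts/fund_positions_2022-06-30.csv",
                       "/inbound/fund_positions_2022-06-30.csv")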

The ability to provide greater insight and transparency to ever-more sophisticated investors—both institutional and retail—can be a differentiator for funds to attract and retain business, and to offer different service levels across different client tiers.

Better data, better service

“If you ask the majority of firms whether they would like to do more reporting, they’d say yes, but they are constrained by time and cost, and how much personalization they can provide,” says FundSense’s Keen.

But simply increasing headcount to provide the kind of individual, white-glove service that clients desire isn’t practical. Automation is key to providing more services to more clients, and to the important process of validating data that is held in a golden copy—or, often, in disparate systems and applications. That data feeds a range of report types: composition reports of holdings data, performance data, static data such as the fundamentals of a product and the fund manager and their track record, as well as market reviews and text content.

“Investment reporting sounds very normal but is actually a very broad spectrum of applications—from institutional client reporting to fund factsheets—and customers include market data, reporting systems, performance and accounting,” says Factbook’s Shasore. “The number of reports depends on the type of investors and the number of clients. For example, a firm managing institutional money might decide to create a standardized report for a small number of clients. Others might write new reports whenever clients ask for them, so they can create tiered service levels.”

But ensuring the data going into those reports is accurate requires careful validation, which becomes exponentially more time-consuming and error-prone if a firm is storing datasets from multiple vendors and third-party administrators separately rather than in a single master database.

“People often have the data in spreadsheets and export that into reports, but they should be looking to automate that and add structure to it. And that means having a golden source they can rely on,” Shasore says. “The magic ingredient is the golden source. The amount of data in an asset manager’s world view is colossal—and that’s just the data that affects reporting. So you need to commit it to a single source … and put it through all the checks and balances one time. Often, firms have some data in silo A for one purpose, and other data in silo B for another purpose, and so on, and they’re performing those checks and validating it multiple times.”
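
A minimal sketch of the “check once, commit once” idea Shasore describes, assuming a simple Python ingestion script with hypothetical rules and field names: each record is validated a single time on its way into the golden source, rather than re-validated in every downstream silo.

    from datetime import date

    def validate_record(record: dict) -> list[str]:
        """Run the checks once, at the point of entry to the golden source.
        Returns a list of validation errors (empty means the record is clean)."""
        errors = []
        if not record.get("isin") or len(record["isin"]) != 12:
            errors.append("ISIN missing or not 12 characters")
        if record.get("currency") not in {"USD", "GBP", "EUR", "JPY", "CHF"}:
            errors.append(f"unexpected currency: {record.get('currency')}")
        if record.get("as_of") and record["as_of"] > date.today():
            errors.append("as-of date is in the future")
        return errors

    # Placeholder record, not a real security.
    record = {"isin": "GB00EXAMPLE1", "currency": "GBP", "as_of": date(2022, 6, 30)}
    print(validate_record(record))  # [] -> safe to commit to the golden source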

That duplication is not only costly and inefficient, but could introduce inconsistencies. “Validation is a huge, huge burden. First, you don’t want to get anything wrong because of the reputational issues that it could cause. And second, the penalties in the retail environment can be huge. And tiny fractions can make all the difference,” he adds.

Getting it right

Client reporting is a key area where getting the right data from the right source to the right place at the right time is critical to better informing and serving existing investors, as well as for attracting new ones.

These were the main reasons behind San Francisco-based asset manager RadiantESG’s decision to create a series of automated fund factsheets targeted at current and potential investors.

“A lot of the time, people think of quantitative processes as a black box—and we want to be the antithesis of that,” says RadiantESG CEO Heidi Ridley. “We want to be able to show investors not just the top 10 positions, but all positions, and demonstrate to clients what’s driving that positioning and the performance implications.”

That’s easier said than done. To achieve it, a firm must first have the data, ensure that it’s accurate and organized, and have systems in place to move it from wherever it’s stored to the point where it becomes part of the factsheets. In RadiantESG’s case, the firm doesn’t use scores from ESG data vendors, but sources raw data from third parties and NGOs, and applies natural-language processing tools to news and company reports to determine “credible intent”—the firm’s own confidence signal that a company will make good on specific ESG claims.

To create the factsheets, RadiantESG turned to its outsourcing partner STP Investment Services, which the firm already uses to outsource operations, compliance, and its trading desk, allowing the firm to focus on investing and its alpha engine.

RadiantESG COO Jennie Klein says STP already has existing processes for creating factsheets, but that the firm specifically wanted to automate the provision of its data to the vendor, rather than manually logging information into a portal, which could introduce human error.

The result, Ridley says, will be yet another way—in addition to its rigorous ESG selection and categorization processes—that the firm can differentiate itself from its competition.

“With many ESG funds, you don’t see a lot of actual ESG data in their reports; maybe in the commentary. STP has the infrastructure to enable us to show ESG KPIs from information we provide to populate factsheets,” she says.

At another client, a large regional US bank and multi-channel manager, STP is using its BluePrint data warehouse to do the opposite: instead of getting data into factsheet documents, as at RadiantESG, it’s helping the client firm get data out of documents. Specifically, the firm is extracting instructions from text-based investor policy statements—a client’s instructions about what their funds can and cannot be invested in—and getting that data into the systems and workflows used to execute trades. In short, the system becomes a unified data warehouse for extracted data across business lines, rather than having disparate systems for each source.

“Our BluePrint Compliance Engine came about during a discussion with a client partner who wanted to elevate the management of their fiduciary responsibility by systematizing it and making it more digital,” says STP’s Murray. “Specifically, one area they came up with was the management and testing of compliance with the investor policy statements (IPSs) and the fundamental fiduciary responsibility that comes with adhering to these important guidelines. We discovered that these can be very ambiguous and outdated, often exist in unstructured PDF formats, and are maintained manually. Whenever you’re dealing with that combination, there’s risk involved.”

For example, an asset allocation rule within a client’s IPS may specify that the investor wants no more than 20% of their funds invested in alternatives. But if hedge funds have performed well and long-only, blue-chip funds have done badly, then the allocation may have crept up to 25%, requiring the firm to trade out of some of those alternatives. Testing daily to understand when a fund may be about to breach such limits and alerting the manager requires the ability to extract and understand the client instructions, a rules engine to implement them, and market data to monitor market exposure.
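
The heart of such a daily test is a simple exposure calculation compared against the limit written into the IPS. The sketch below uses the 20% alternatives cap from the example above; the position data and field names are hypothetical, and a production rules engine would handle many rule types, not just this one.

    def allocation_breach(positions: list[dict], asset_class: str, limit: float):
        """Return the current weight of an asset class and whether it
        breaches the IPS limit (e.g. 'no more than 20% in alternatives')."""
        total = sum(p["market_value"] for p in positions)
        exposure = sum(p["market_value"] for p in positions
                       if p["asset_class"] == asset_class)
        weight = exposure / total
        return weight, weight > limit

    positions = [
        {"asset_class": "alternatives", "market_value": 25_000_000},
        {"asset_class": "equities",     "market_value": 60_000_000},
        {"asset_class": "fixed_income", "market_value": 15_000_000},
    ]
    weight, breached = allocation_breach(positions, "alternatives", 0.20)
    print(f"alternatives weight: {weight:.0%}, breach: {breached}")  # 25%, True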

So, in partnership with the firm’s chief risk officer, STP set about validating the IPSs, ensuring they were up to date, and plugging holes in the data. Where ambiguity existed, STP put the whole text into one column in the database. In the second column, it extracted clearer and more specific phrasing from the IPS text, and in the third column converted that into code establishing rules for each element of the IPS. STP then pointed those rules to the data sources.
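
A hedged sketch of that three-column structure, with hypothetical field names: the original IPS wording, the disambiguated phrasing, and a machine-testable expression for the rules engine to evaluate against the data sources.

    from dataclasses import dataclass

    @dataclass
    class IpsRule:
        raw_text: str         # column 1: the IPS clause as written, ambiguity and all
        clarified_text: str   # column 2: the clearer, more specific phrasing
        rule_expression: str  # column 3: expression the rules engine evaluates

    rule = IpsRule(
        raw_text="The portfolio should not be overly concentrated in alternative investments.",
        clarified_text="Alternatives may not exceed 20% of total portfolio market value.",
        rule_expression="weight('alternatives') <= 0.20",
    )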

“In my experience, the single biggest problem with testing rules engines—specifically, compliance engines—is that the data is unavailable. You might get 100 exceptions, 99 of which are data exceptions because the data isn’t there. Ideally, the data should be in a warehouse or accessible through an API. At STP, we work with both structured and unstructured data formats and move the data through our validation process into our BluePrint data warehouse. When it doesn’t make sense to move data from certain sources into a warehouse, we set up the appropriate API connectivity that still enables you to effectively test the rules and associate action plans to the real exceptions, eliminating all the unnecessary noise that can exist,” Murray says. As a result, the client has seen increased productivity and a huge reduction in risk, he adds.

Ideally, that database would be a firm’s enterprise golden copy, which should be flexible enough to store any data type. But sometimes data is stored separately because, as in the case of the unnamed firm’s investor policy statements or RadiantESG’s proprietary ESG data, it’s such a unique dataset. Another example is tax data and the impact of a trade on the tax profile of a fund and its distributions. While pre-trade risk checks that validate the expected outcome of a trade, its potential gains or losses, and any potential risk it creates are commonplace, assessing the tax impact of a trade is less widely considered, but important for some buy-side firms.

Reflecting pool

Thus, in some cases, the complexity and accessibility of a firm’s data “pool” reflect the complexity of the assets or markets themselves.

Salina, KS-based Rocky Mountain Advisers runs a $1.6 billion closed-end fund, which has specific distribution and reporting requirements, along with a need for specialist tax datasets.

“There are a lot of nuances to closed-end funds that many other investment vehicles don’t have to deal with. For example, we pay quarterly distribution returns set in advance, so we have to be aware of the tax implications of any trades we make,” says Jacob Hemmer, associate portfolio manager at Rocky Mountain.

In the past, obtaining that data would involve phone calls and collating it in a spreadsheet. The firm also reports quarterly to its board and produces annual shareholder reports, which used to require him to look up data on Bloomberg and Morningstar and manually input it into spreadsheets and charts.

“Our goal is to make investment decisions, not spend time on record keeping and reporting. So if I don’t have to spend a day gathering information, that’s more time I can spend researching investments and benefit our shareholders,” Hemmer says.

Given its size, the firm doesn’t have a golden copy—unless you count those manually maintained spreadsheets. Recognizing that it needed better ways to obtain and maintain its data, last November the firm went live on a cloud-hosted environment operated by Paralel Technologies, a recent startup specializing in serving the back-office needs of smaller fund managers.

Part of the decision was driven by cost efficiencies, and part by a need to offload unnecessary administrative burdens, freeing up staff to focus on making money. Originally, the firm performed all the back-office tasks in-house, which Hemmer says isn’t really efficient for just one fund. Likewise, it could track down K-1 and 19-A filings itself. “But that would be a ton of work and subject to error,” he says.

Paralel’s data environment is hosted in Amazon Web Services’ cloud, and ties together best-of-breed platforms from third-party vendors—Investment Accounting Manager (formerly InvestOne) from FIS for fund accounting, PowerAgent from Envision Financial Systems for transfer agency services, and Confluence for administration and performance—with “the best data lake I’ve ever seen,” says Jonathan Vickery, CTO of Paralel.

Vickery has an intimate understanding of the data challenges facing funds, having spent 23 years at Brown Brothers Harriman in various product roles serving the firm’s buy-side clients, including product owner for its Fund Investor Solutions business, head of client service for the Americas, head of middle office product, and head of fintech products, before joining Paralel last year.

Following BBH’s acquisition of Oppenheimer & Co.’s middle office business in 2014, Vickery moved to Denver to run that business, integrate it with other BBH assets, and turn it into a product that BBH could sell to asset managers. The resulting middle-office platform was a data services story with $700 million in assets.

“But we were always trying to fill potholes from 10 years earlier. There were different databases, the data didn’t always match, and there were unique connections between different areas and systems,” he says. When he met Paralel founder and CEO Jeremy May, he recognized that the biggest challenge was the cohesion—or lack thereof—between data sources feeding different applications. So, Paralel’s environment provides a transport layer for getting data in and out of the vendor apps it works with, and a portal for clients to query the data.

The result, Vickery says, is that clients get data that is quicker, more accurate, and better reconciled.

Another hard-to-obtain dataset for Rocky Mountain is month-to-date total returns for a fund. Previously, that would require querying Bloomberg, but once the firm requested it, Paralel added it to the client portal.
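
As a generic illustration, and not necessarily how Paralel or Bloomberg calculate it, a month-to-date total return is typically produced by geometrically linking the fund’s daily total returns since the prior month-end.

    def month_to_date_return(daily_returns: list[float]) -> float:
        """Geometrically link daily total returns since the prior month-end.
        Example: three days of +0.5%, -0.2%, +0.3%."""
        growth = 1.0
        for r in daily_returns:
            growth *= 1.0 + r
        return growth - 1.0

    print(f"{month_to_date_return([0.005, -0.002, 0.003]):.4%}")  # ~0.5999%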

“We’re a small shop, so we don’t have a Bloomberg terminal for every member of the team. So this has also allowed members of the team who don’t have a Bloomberg to get access to the data they need,” Hemmer says.

Another growing area involving complex datasets is secondary markets in privately held equities and other private markets, such as real estate. Not only are these datasets maintained separately from participants’ golden copies, but they are often paper-based rather than digital.

“The private markets are still very immature, and one problem that kept coming up was the unstructured data issue. You receive a large amount of data from fund managers on paper—fund financial statements, performance reports, and so on. And that amount of data traveling around in documents was untenable. People want data, not documents,” says Michael Aldridge, president, chief revenue officer, and co-founder of Accelex, a startup AI platform for extracting and reporting private markets data. “Some of the performance reports I’m talking about can run for 100 to 200 pages about performance of the fund’s holdings.”

Accelex ran a proof-of-concept project in 2019 with early adopters to use technology to digitize and automate that data. The vendor started by digitizing and automating long, quarterly reports sent to investors by general partners, then added capital account statements, followed by cash flow notices, and is now adding fund financial statements—a measure of a fund’s efficiency and performance that’s a legal requirement in Switzerland, and which Aldridge says should be a part of any firm’s due diligence in selecting fund managers.

And private markets don’t just cover equities markets—there’s also private debt, real estate, and other asset classes. In real estate, for example, there are data elements that don’t lend themselves to inclusion in traditional golden copies, such as square footage, occupancy rates, and property type.

So, with data volumes and data types defying pigeonholing and shoehorning into traditional golden copies, and with more golden copies springing up inside firms, what’s the solution? Not all firms have the inclination to pursue—or frankly, can afford—big rip-and-replace data integration projects. The good news is that there is a modern-day solution to this age-old challenge. And it’s a solution that’s a natural evolution of firms’ existing efforts to adopt cloud computing more widely.

Cloud: silver lining for golden copies

Mike Meriton is one of those in the industry credited with coining the term “golden source”—indeed, for many years he was CEO of data management company GoldenSource. Now, as co-founder and COO of industry body the EDM Council, he says the concept of a golden copy may now be outdated, yet it remains something that the industry still grapples with.

“The reality is that this is still a challenge for many companies—and it’s not just limited to asset managers and the buy side,” he says, highlighting three compounding challenges, the first two of which have been recurring themes so far: continuing M&A, creating the need to integrate multiple golden copies; the fact that there’s more data—and more sources of data—than ever before; and that there is more transformation of data going on—for example, the data housed in what one business line refers to as its golden copy may actually be derived from data held in what other parts of a business believe to be their golden copy.

Meriton says the solution lies in new tools being developed and rolled out to manage multi-cloud environments as the industry more broadly adopts the cloud.

“If you look at cloud, multi-cloud, and integrating other data sources, the concept of a golden copy becomes harder to define,” he says. In fact, most financial firms employ at least two cloud providers, storing different data in each. So the EDM Council’s Cloud Data Management Capabilities framework outlines a way by which—instead of trying to shoehorn all data into one golden copy—firms can create multiple master data stores and use cataloging technologies to manage what data resides in which store.

“If you have distributed systems, you can utilize a data fabric or data mesh distributed architecture with catalogs that bring together the data and ensure its integrity,” Meriton says. With a data mesh, each area creates an API to the data fabric for more flexible access to the data. While these types of data architectures are less pervasive among the buy side compared to tier-one investment banks with greater resources, they are quickly becoming more mainstream, he adds.
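
A heavily simplified sketch of the cataloging idea: rather than one physical golden copy, a catalog records which domain owns each dataset and where its API lives, and consumers resolve data through the catalog. The dataset names and endpoints below are hypothetical.

    # A toy data catalog: each entry records the owning domain and the API
    # through which that domain exposes its mastered data.
    CATALOG = {
        "security_master":  {"owner": "reference-data",  "endpoint": "https://api.example.com/refdata/v1/securities"},
        "fund_performance": {"owner": "performance",     "endpoint": "https://api.example.com/perf/v1/returns"},
        "client_mandates":  {"owner": "client-services", "endpoint": "https://api.example.com/clients/v1/mandates"},
    }

    def resolve(dataset: str) -> str:
        """Look up the authoritative endpoint for a dataset, rather than
        copying the data into yet another local 'golden copy'."""
        entry = CATALOG.get(dataset)
        if entry is None:
            raise KeyError(f"dataset not registered in the catalog: {dataset}")
        return entry["endpoint"]

    print(resolve("security_master"))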

In response to requests from its membership, the EDM Council is now embarking on the second phase of its CDMC framework, establishing a working group that will delve into four specific topics—data masters, data marketplaces, data sharing, and data analytics—over the next six months, share best practices for each, then publish guidance on how firms can practically address each.

While this won’t solve firms’ golden copy challenges in one fell swoop, it will help provide a path for firms to address them as part of existing cloud migrations.

“It doesn’t mean you have to move all your data to the cloud. It means you can create new cataloging techniques for master data and golden copy management in a heterogeneous cloud environment,” Meriton says.
