Historically, collaboration between banks hasn’t yielded much success—except, perhaps, when it comes to reference data. James Rundle talks with Peter Moss, CEO of SmartStream’s Reference Data Utility (RDU), about how the project came about, and how it’s going to help the industry in 2018.
What were the origins of the RDU, and what is it designed to do for the industry?
Peter Moss, CEO of SmartStream’s Reference Data Utility: The RDU is a reference data utility, as the acronym implies. It’s focused on reference data for financial products that trade. In essence, the focus is on building what the industry calls a securities master. The initiative was kicked off by a group of banks about four or five years ago under a project called Spred, which stood for Securities Product Reference Data. The banks were keen to make the process of building a securities master more efficient and cost-effective. They were all doing similar work inside their organizations, in terms of pulling together data from lots of sources and then massaging it into a form where it was accurate, complete and consistent. They basically said that this was silly: they recognized that they were all pulling the same data from the same sources and doing all this work on it, and that it would be substantially easier to have it done once, as a utility, so that they could all benefit from the results.
So essentially it’s taking the heavy lifting out of the equation for the individual firms?
Moss: Exactly. There’s a lot of automation and a lot of best-practice adoption from within the tier-one banks we’ve been working with. The goal is to do it once, do it well, and do it for the industry.
This type of utility seems particularly relevant right now, given the advent of the revised Markets in Financial Instruments Directive (Mifid II) in Europe, and the requirements that will impose around reference data, reporting and other areas.
Moss: We’re very focused on the reference data that’s required to trade, and of course, Mifid II has brought out a significant requirement for reference data to support trading processes. So one of the things we’ve done this year is built out a reference data product that’s very specific to Mifid II, and it gives the larger trading organizations—those operating what Mifid II calls systematic internalizers—the reference data that they need to do all of their pre-trade price transparency, their post-trade reporting and their transaction reporting. Mifid II has given us a focus to extend the data out into that regulatory space.
We talk about the multiplicity of data sources a lot—whether it’s trading venues, regulators or other sources—but what actually goes on inside the RDU once data is ingested?
Moss: We source data from all of the places that a bank would normally source it from—a combination of data vendors, exchanges, specialist providers, and in the case of Mifid II, regulators as well. We identify the mechanism that we’re going to use to source the data from each of those, and the way the platform works is that, on a regular basis, it goes off and acquires the necessary data; we then load it into the platform and keep the data from each source separate. We normalize it into a consistent form as we load it into our database, then cross-reference it all, so there’s a reliable way of ensuring that reference data sourced from one location can be tied to reference data sourced from another. A good example would be where we source some data from a vendor and some from an exchange, but when we put it together for a client, we have to ensure it’s the same instrument that’s being referred to. The cross-referencing [functionality] is built for that purpose.
Then we apply a range of quality checks in an automated way. We’ve built in exception handling so that, when we see quality issues, our data operations team can get involved and resolve the problem. The team offers a 24/7 service and manages the flow of information through the platform. Then, for each individual client, we essentially build a distribution file that is unique to them. That’s because all of the clients will have a slightly different mix of data sources, so we pick the data they need from the sources we’ve acquired it from, and then we build a distribution specific to the customer based on the fields they need from the vendors they’ve chosen. That last 20 percent is unique; the other 80 percent is broadly consistent across all customers.
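The pipeline Moss describes—load per-source records, normalize them into a common form, cross-reference instruments across sources, flag quality exceptions for the operations team, and build a per-client distribution from the client's chosen sources and fields—can be sketched as follows. This is an illustrative sketch only, not SmartStream's implementation; every field name, source name, and function here is hypothetical.

```python
# Hypothetical sketch of a reference-data pipeline: normalize, cross-reference,
# quality-check, and build a client-specific distribution. All schemas invented.

def normalize(source, record):
    """Map a source-specific record onto a common schema, keeping provenance."""
    field_map = {  # hypothetical per-source field mappings (raw name -> common name)
        "vendor_a": {"isin_code": "isin", "name": "description", "ccy": "currency"},
        "exchange_x": {"ISIN": "isin", "instr_name": "description", "currency": "currency"},
    }
    return {"source": source,
            **{common: record[raw] for raw, common in field_map[source].items()}}

def cross_reference(records):
    """Group normalized records by identifier, so data sourced from one
    location can be tied to the same instrument sourced from another."""
    master = {}
    for rec in records:
        master.setdefault(rec["isin"], []).append(rec)
    return master

def quality_exceptions(master):
    """Flag instruments whose sources disagree, for the data-ops team to resolve."""
    return {isin: recs for isin, recs in master.items()
            if len({r["currency"] for r in recs}) > 1}

def build_distribution(master, preferred_source, fields):
    """Pick each client's preferred source and requested fields (the 'last 20%')."""
    out = []
    for isin, recs in sorted(master.items()):
        chosen = next((r for r in recs if r["source"] == preferred_source), recs[0])
        out.append({f: chosen[f] for f in fields})
    return out

# Example run: two hypothetical sources describing the same instrument.
raw = [("vendor_a", {"isin_code": "GB00B03MLX29", "name": "Shell PLC", "ccy": "GBP"}),
       ("exchange_x", {"ISIN": "GB00B03MLX29", "instr_name": "SHELL", "currency": "GBP"})]
master = cross_reference(normalize(s, r) for s, r in raw)
print(quality_exceptions(master))   # empty dict: the sources agree on currency
print(build_distribution(master, "vendor_a", ["isin", "description"]))
```

The key design point mirrored here is that per-source data is kept separate through normalization and only merged at distribution time, which is what lets each client receive a file built from its own licensed mix of vendors.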