To compete in today’s markets, firms must process increasing amounts of data, while avoiding the cost and complexity that new feeds and sources typically entail. Philip Sparacino, executive vice president at Collaborative Software Initiative, says data abstraction layers can address technical and cost-related challenges, while allowing more flexible data sourcing.
As the need for market data grows across financial services, the cost of data, infrastructure and application development is rising at an alarming rate.
Many disparate data services, some built on proprietary platforms, sit at the core of the world’s largest trading environments, primarily because of the way data has been sold, distributed and consumed over the years. The industry now faces numerous opportunities to improve efficiency, lower cost and maximize the business use of enterprise market data.
One solution emerging across the industry is the concept of abstraction layers that normalize data across multiple vendors and extend a common enterprise market data platform to their consumers. (Full disclosure: Collaborative Software Initiative offers one such solution, Market Data Abstraction Layer, aka MDAL).
Financial firms consume—and have helped data providers create—a multitude of products covering every aspect of data usage, ranging from real-time, ultra-low-latency trading-venue feeds for programmatic and institutional trading, to static files for research, analytics and other functions tied to corporate actions and reporting. While market data providers now offer a range of different distribution methods, data distribution remains an ongoing challenge for data managers, who strive to build a common means of consumption to serve vast numbers of consumers.
In the past, data managers focused mainly on technology. They assessed the needs of the business and translated those needs to a data provider, which then delivered a technical solution. Today, senior executives oversee large market data investments and must not only understand the technology behind enterprise data management, but also manage internal clients and external suppliers like a standalone business.
In today’s highly competitive trading environment, both business unit leaders and technology managers look to their data management teams not only to provide them with data, but also for advisory services: what data is available in the marketplace, what new data products exist, industry best practices, how data can be used to drive their business initiatives, and how to continuously improve data quality. To deliver sound consultative services of this kind, market data organizations must also have a streamlined enterprise solution that enables them to acquire, aggregate and distribute data quickly and accurately.
Front-office data consumers in large firms tend to be segregated by asset class, each supported by a dedicated technology organization. These technology groups are the actual front-line consumers of market data. As such, their end-user, latency and application requirements determine the number of data services and delivery methods they use. To complicate matters further, some technology groups procure data both from existing internal data assets and directly from data vendors and exchanges. This fuels an ongoing battle between market data and technology groups, and these very practices have further embedded proprietary platforms, proprietary data models and redundant data services across financial institutions.
Adopting a high-performance data abstraction solution directly addresses the long-standing need to aggregate and normalize data across multiple vendors. It allows data managers to mix and match services from different vendors for specific needs, creating true best-of-breed data architectures, and to add or displace vendors as required. With the ability to seamlessly interchange data services regardless of their origin, financial firms can eliminate duplicate data services, accelerating the phase-out of non-scalable legacy platforms. As a byproduct, this will also foster competition among data vendors, which can only benefit firms as they continue to consume massive amounts of data.
The resulting reduction in application development and maintenance will also generate substantial savings, since frequent changes to data services are managed centrally at the abstraction layer rather than in every data-consuming application. This is mainly because with MDAL, for example, downstream applications write to a single API, and the data provided to end-users is vendor-agnostic. As a result, market data organizations can significantly reduce time-to-market, for example when expanding into new asset classes, since multi-asset-class data can be delivered to the business without buying and implementing an entire consolidated feed, which would end up only partially used, just to obtain a specific dataset.
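The pattern described above can be illustrated with a minimal sketch. Note that every name here (Quote, VendorAdapter, AbstractionLayer, VendorA, VendorB) is hypothetical and for illustration only; it does not represent MDAL's actual API. The sketch shows the core idea: consumers write to one normalized interface, each vendor feed is wrapped in an adapter that maps its native schema into a common model, and a vendor can be added or displaced by rerouting inside the layer, without touching any downstream application.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


# Normalized record every consumer sees, regardless of source vendor.
@dataclass
class Quote:
    symbol: str
    bid: float
    ask: float
    source: str


# Common adapter interface: each vendor feed maps its native payload
# into the normalized Quote model.
class VendorAdapter(ABC):
    @abstractmethod
    def get_quote(self, symbol: str) -> Quote: ...


# Hypothetical vendor A: native schema uses "sym" / "b" / "a" keys.
class VendorA(VendorAdapter):
    def get_quote(self, symbol: str) -> Quote:
        raw = {"sym": symbol, "b": 100.1, "a": 100.3}  # stand-in for a real feed call
        return Quote(raw["sym"], raw["b"], raw["a"], source="vendorA")


# Hypothetical vendor B: same data, different native schema.
class VendorB(VendorAdapter):
    def get_quote(self, symbol: str) -> Quote:
        raw = {"ticker": symbol, "bidPx": 100.0, "askPx": 100.4}
        return Quote(raw["ticker"], raw["bidPx"], raw["askPx"], source="vendorB")


# The abstraction layer: consumers call one API; vendor routing is a
# central configuration concern, not an application-code concern.
class AbstractionLayer:
    def __init__(self) -> None:
        self._routes: dict[str, VendorAdapter] = {}
        self._default: VendorAdapter | None = None

    def set_default(self, adapter: VendorAdapter) -> None:
        self._default = adapter

    def route(self, symbol: str, adapter: VendorAdapter) -> None:
        self._routes[symbol] = adapter

    def get_quote(self, symbol: str) -> Quote:
        adapter = self._routes.get(symbol, self._default)
        if adapter is None:
            raise LookupError(f"no vendor routed for {symbol}")
        return adapter.get_quote(symbol)


layer = AbstractionLayer()
layer.set_default(VendorA())
layer.route("XYZ", VendorB())  # displace the vendor for one symbol only

print(layer.get_quote("ABC").source)  # vendorA
print(layer.get_quote("XYZ").source)  # vendorB
```

The design point is that swapping VendorA for VendorB is a one-line routing change inside the layer; every consuming application keeps calling the same `get_quote` and receiving the same normalized Quote.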
Beyond providing a platform that scales with the ongoing needs of the business, a market data abstraction layer establishes a consistent and sustainable global standard that enables data managers to provide the best available data, on demand, at the lowest cost to their internal clients.
Finally, by working together, financial firms could leverage the competencies of both their technologists and their subject matter experts in the interest of building a low-cost, industry-wide standard for market data abstraction. The industry-wide adoption of a data abstraction layer would address key factors that continue to drive up overall IT costs, and would also bring much-needed structure and efficiency to an expensive area of financial services, ushering in the next generation of enterprise data management.