
This article was paid for by a contributing third party.

Data-Driven Execution—Looking Back to See Forward

Reviewing favorable outcomes and attempting to replicate them is by no means a new concept in the capital markets. Portfolio managers, execution professionals and risk managers have long used this principle to drive their decisions. Only with the advent of advanced data analytics tools, however, are they now in a position to unlock, and ultimately monetize, the potential hidden within their vast data repositories.

The truism that past success guarantees nothing is especially pertinent to the capital markets. Yet when firms understand the variables and decisions that contributed to successful outcomes, it is reasonable to expect similar results when those variables recur and those decisions are systematized. From a business user’s perspective, the rationale is simple: if the inputs this time resemble the inputs last time, and last time the result was favorable, then a similar outcome can reasonably be expected. This principle drives firms on both sides of the market in their pursuit of performance and of delivering value to their customers and investors.

In recent years, the industry has even witnessed the emergence of tools designed to monitor portfolio managers’ and traders’ behavior to understand how and why decisions were made, and then to model that behavior in the interest of optimizing performance and consistency.

A Simple Premise

The premise of systematizing execution decisions based on historical trade data is relatively simple. Suppose you are a portfolio manager looking to buy a specific illiquid bond. You have two options. You can do it the way you have always done it, assuming that the banks you are familiar with will offer the most reliable liquidity in that bond, based on past experience. Or you can inform and systematize your decision by looking back across the entire universe of securities you have traded over, say, the past three years (or two days, or even two minutes for more liquid instruments), focusing on the activity most closely related to the bond you are looking to trade, in terms of sector, issuer and so on.

The idea is to leverage all of that information, along with recent axes you might have received from dealers, which, when aggregated, provide the pre-trade market intelligence that significantly increases your chances of a favorable outcome. In short, your decision regarding how you execute that trade becomes more deliberate, more informed and significantly more repeatable.
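The aggregation step described above can be sketched in a few lines. The following is a minimal, hypothetical illustration: the column names, the fill-rate metric and the 0.5 axe boost are all assumptions for the sake of the example, not a description of any vendor's actual model.

```python
import pandas as pd

# Hypothetical trade history over the look-back window (assumed schema).
trades = pd.DataFrame({
    "dealer": ["BankA", "BankB", "BankA", "BankC", "BankB"],
    "issuer": ["ACME", "ACME", "ACME", "OTHER", "ACME"],
    "sector": ["industrials", "industrials", "industrials",
               "utilities", "industrials"],
    "filled": [1, 1, 0, 1, 1],  # 1 = dealer completed the trade
})

# Recent axes received from dealers (assumed schema).
axes = pd.DataFrame({"dealer": ["BankB", "BankC"], "axe": [1, 1]})

def rank_dealers(trades, axes, issuer, sector):
    """Rank dealers by historical fill rate on similar bonds,
    boosted by any live axe they have advertised."""
    similar = trades[(trades["issuer"] == issuer) |
                     (trades["sector"] == sector)]
    fill_rate = similar.groupby("dealer")["filled"].mean()
    score = fill_rate.add(axes.set_index("dealer")["axe"] * 0.5,
                          fill_value=0.0)
    return score.sort_values(ascending=False)

ranking = rank_dealers(trades, axes, issuer="ACME", sector="industrials")
```

In this toy data set, BankB ranks first: it filled both similar trades and is currently axed in the name.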

Nor are the International Securities Identification Number (ISIN) and the issuer the only relevant filters: trading early in the morning versus late in the afternoon, for example, can generate different indications and different decisions, typically in terms of execution venue. The flexibility to explore data sets through multiple filters and axes is key to taking into account all the specifics of a potential interest.

However, executing on this premise is near impossible without the right technology. In other words, accurate, reliable and transparent pre-trade market intelligence is only feasible with the requisite technology underpinning it.

Stéphane Rio, Opensee

“The idea is that firms can leverage the value of their database and all the data sets they have been storing by filtering according to an ISIN or group of ISINs, an issuer or group of issuers, a maturity, a liquidity context or a certain time during the day,” explains Stéphane Rio, CEO and founder of Opensee, a Paris-based provider of real-time self-service analytics solutions for financial institutions. 

“They can look at their data sets how they want in order to generate this [pre-trade] market intelligence. Essentially, firms can decide in real time what information they want to see and how they want to use it, so that the outcome of the exercise is real best execution.”
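The kind of self-service filtering Rio describes, slicing a stored data set by issuer, ISIN group, maturity or time of day, can be sketched as follows. This is an illustrative example with invented column names and sample data; it is not Opensee's implementation.

```python
import pandas as pd

# Hypothetical stored data set: one row per historical trade.
history = pd.DataFrame({
    "isin":   ["FR001", "FR001", "FR002", "DE003"],
    "issuer": ["ACME", "ACME", "ACME", "RWE"],
    "ts":     pd.to_datetime(["2024-05-01 08:30", "2024-05-01 16:45",
                              "2024-05-02 09:10", "2024-05-02 11:00"]),
    "venue":  ["MTF-1", "Voice", "MTF-1", "MTF-2"],
})

def slice_history(df, issuers=None, isins=None, before_hour=None):
    """Apply whichever filters the user selects; unused filters pass through."""
    mask = pd.Series(True, index=df.index)
    if issuers:
        mask &= df["issuer"].isin(issuers)
    if isins:
        mask &= df["isin"].isin(isins)
    if before_hour is not None:
        mask &= df["ts"].dt.hour < before_hour
    return df[mask]

# Example: morning activity in a group of ACME bonds.
morning = slice_history(history, issuers=["ACME"], before_hour=12)
```

Here the morning slice surfaces only trades done on MTF-1, the sort of venue-level indication the time-of-day filter is meant to reveal.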

The Challenge

There are a number of challenges facing firms generating pre-trade market intelligence, and pretty much all of them pertain to data. Data sets tend to be siloed across the business, and different asset classes invariably have their own trading desks, platforms and conventions, which makes automatically commingling, normalizing and storing disparate data sets a complex undertaking. It is often a challenge even within the same asset class—some firms, for example, store the data relating to their fixed income repo and cash transactions separately.

Similarly, axes that dealers send to investors tend not to be stored and, when they are, they are often located in different repositories from the trades themselves, while orders and requests for quotes tend to sit in yet another location. This challenge is further exacerbated by the ongoing digitization and electronification of the industry, which is driving a significant increase in data volumes, sources and formats—all of which must be ingested, processed and stored. “Storing all this data in a single location means you have access to all that information in a single place, which makes it dramatically easier to leverage the data and cross that information,” Rio says.

The Way Forward

Traditionally, generating pre-trade market intelligence in real time or close to real time has been a pipe dream for all but the very largest market participants with the deepest pockets. Rio explains that Opensee’s strategy has two primary components: scalable technology and an intimate understanding of how data needs to be organized and stored so business users can leverage it how and when they choose to. 

“First, the solution must be scalable,” he says. “When you start to look at all the trades, all the orders, all the axes, all the information firms receive and store, across long historical ranges—and there is a lot of value in retaining all of that information in order to identify trends and patterns—that’s a lot of data, which is why scalability is so critical. But scalability does not mean you have to lose granularity or the real-time aspect, as that is where the value resides.”

Scalability is all well and good but, if the data is not managed properly and made easy to analyze, users will not be able to fully realize its value. “There are many steps to take before data can become useful to a user,” Rio explains. “First, you need to enrich it, join multiple sources and design the optimal data model, and you need to develop processes to correct static data and automatically identify outliers and errors in live data. Once your data is well organized and cleaned, you then need to look for an easy way to build and iterate the right machine learning algorithm to achieve best execution. And when you are done, that’s usually the time you want to include even more data sets—you are therefore looking for an easy way to onboard any new data set on the fly without having to rebuild everything.

“One of our strengths in addition to tech is our understanding of what is required, and our product includes all those functionalities, which is where we believe we have a competitive advantage.”
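One of the cleaning steps Rio mentions, automatically flagging outliers in live data, is often done with robust statistics. Below is a minimal sketch using a median-absolute-deviation test; the threshold and the function itself are illustrative assumptions, not any particular vendor's method.

```python
import statistics

def flag_outliers(prices, k=3.0):
    """Flag values more than k median-absolute-deviations from the
    median. MAD is robust: a single bad quote barely moves it, so
    the bad quote itself stands out. Threshold k is illustrative."""
    med = statistics.median(prices)
    mad = statistics.median(abs(p - med) for p in prices) or 1e-9
    return [abs(p - med) / mad > k for p in prices]

quotes = [99.8, 100.1, 100.0, 99.9, 250.0]  # one fat-fingered quote
flags = flag_outliers(quotes)  # only the 250.0 quote is flagged
```

A mean-and-standard-deviation test would be pulled toward the bad quote itself; the median-based version is the usual choice for noisy live feeds.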
