High-frequency and algorithmic trading have changed the playing field for low-latency data, leading firms to build dedicated, high-performance infrastructures. But, warns Max, firms can't afford to treat these like any other data platform: while some principles still apply, they must be more careful, and more flexible, when contracting for high-speed data services.
In high-frequency, low-latency markets, you might be forgiven for thinking the old adage, "Marry in haste, repent at leisure," no longer applies. You'd be wrong. Data and trading are faster than ever, but that doesn't mean any decision should be rushed, whether it's a decision to place a trade or a decision to buy a new service.
While participants in Inside Market Data's recent latency webcast extolled the benefits of achieving ultra-low latency market data delivery, they also warned that in some circumstances it can be better to sit out a period of unexpected activity that a trading algorithm may not be prepared for, or where price movements are not clear, even if it means that capital goes unused for a time: better not to make money than to lose it through a mistake.
“The faster you know [when not to trade], the better you can manage your risk from that point of view,” says Alejandro Canete, head quant developer at Pan Alpha Trading. But to be in a position to make that choice, you must first have a low-latency infrastructure that lets you trade—or not—competitively. In many cases, this has resulted in firms building high-performance architectures that run separately from their legacy market data platforms.
“The guys with a single algo are measuring everything in nanoseconds, while the guys who manage strategies across multiple algos or maybe the risk around this … may be in the microsecond or low millisecond range. It’s the ecosystem of players who need their data in slightly different places,” says Barry Thompson, CTO of Tervela. “So you end up with a parallel market data plant … which people are dipping into for a different class of data.”
This doesn't mean firms can afford to manage their low-latency content and data architectures any less rigorously. In fact, they should pay extra-careful attention when dealing with the new paradigms of low latency, as administrative errors—just like bad trades—can be accelerated in high-performance infrastructures. Say, for example, you accidentally allow 10 traders access to data that they aren't authorized to view. You could expect a big penalty fee from the aggrieved exchange, with charges back-dated to when your lapse in control began. Now, say you allow 10 algorithms access to the same data: The amount of trading an algorithm can perform compared to a human trader—and hence, the value it derives from the data—is multiplied, so an entitlements error in your low-latency architecture could be far more costly than one relating to the infrastructure that serves the rest of your business. Plus, these per-application fees are typically much higher than ordinary per-user fees to begin with.
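The principle above—deny access by default and keep a dated audit trail, so a lapse in control can be bounded precisely rather than back-dated by the exchange—can be sketched in a few lines. This is purely illustrative; the class and field names (`Subscriber`, `EntitlementsGate`, and so on) are hypothetical, and real platforms rely on vendor- or exchange-specific permissioning systems.

```python
# Illustrative sketch of a deny-by-default entitlements gate with an audit
# trail. All names here are hypothetical, not any vendor's actual API.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Subscriber:
    subscriber_id: str
    kind: str                              # "human" trader or "algo"
    entitled_feeds: set = field(default_factory=set)


class EntitlementsGate:
    """Checks every feed request and records a timestamped decision,
    so any entitlements error can be dated from its first occurrence."""

    def __init__(self):
        self.audit_log = []

    def check(self, sub: Subscriber, feed: str) -> bool:
        allowed = feed in sub.entitled_feeds   # deny unless explicitly entitled
        self.audit_log.append(
            (datetime.now(timezone.utc), sub.subscriber_id, sub.kind, feed, allowed)
        )
        return allowed


gate = EntitlementsGate()
trader = Subscriber("trader-07", "human", {"FEED_L1"})
algo = Subscriber("algo-12", "algo", {"FEED_L1"})

print(gate.check(trader, "FEED_L1"))   # entitled: True
print(gate.check(algo, "FEED_L2"))     # not entitled: False, and logged
```

The audit log is the key design choice: because every denial and grant is timestamped, a firm can demonstrate exactly when an unauthorized application first touched a feed, rather than accepting an exchange's back-dated estimate.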
This requires a comprehensive understanding of exchange policies on data usage—a need that has prompted the creation of databases such as The Exchange Guide (now owned by data technology vendor B2N) and Ballintrae's Exchange Rules and Regulations Database—to help firms ensure compliance with the rules governing the data they consume.
However, firms must also change the way they approach data in a low-latency world, and contracts must adapt to reflect this, says Paul Hinton, commercial technology partner at London-based law firm Kemp Little LLP. Not only must contracts reflect the demands of a low-latency environment where a small delay can be as critical as an outright system failure, meaning that service-level agreements must be more granular and adhere strictly to specified performance levels; they should also take into account that firms need to be more agile, and that contracts must be more flexible to accommodate this. For example, Hinton says, firms must be free to cancel a service in favor of a faster one as vendors compete over performance, or to run competing vendors alongside each other and adopt a “pay-as-you-go” model, depending on which is faster—and hence, which the firm uses on a particular day.
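The "pay-as-you-go" arrangement Hinton describes—running competing vendors side by side and using whichever is faster on a given day—can be sketched as a simple routing decision over measured latencies. The vendor names and figures below are hypothetical, and real deployments would measure latency continuously at the network layer rather than from a static sample.

```python
# Illustrative sketch: route traffic to whichever of two competing feed
# vendors measured faster, and bill only for the vendor actually used.
# Vendor names and latency figures are hypothetical.
from statistics import median


def pick_vendor(latency_samples: dict) -> str:
    """Return the vendor with the lowest median measured latency."""
    return min(latency_samples, key=lambda v: median(latency_samples[v]))


# Hypothetical per-message latency measurements for the day, in microseconds.
samples = {
    "vendor_a": [42.0, 45.5, 41.8, 44.1],
    "vendor_b": [39.9, 40.3, 43.0, 38.7],
}

active = pick_vendor(samples)
print(active)   # vendor_b has the lower median in this sample
# Under a pay-as-you-go contract, only `active` would appear on the day's bill.
```

The median is used rather than the mean so that a single outlier spike does not flip the routing decision; a production system might also require a minimum margin of difference before switching, to avoid churning between vendors.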
Low latency is about being able to make the right decisions quickly on the most timely data. Without the "right decisions," you're just making bad decisions, faster. And a low-latency infrastructure without the right policies and administration frameworks in place is like a dog chasing its tail: it can run as fast as it likes, endlessly expending resources, but it will never catch it.