The low-latency arms race has spawned a mini-industry of latency management and network monitoring. But it has also produced an intense focus on performance: when you’re measuring speed in microseconds, you expect data to be delivered and processed equally fast—or what’s the point of investing in latency monitoring appliances and network taps? So the question is, how can you use these metrics once you have them?
An obvious example is using latency metrics as an input to trading strategies: to spot latency arbitrage opportunities between trading venues, or, more likely, to block orders destined for a venue whose latency exceeds a specified limit. Trading against such a venue means acting on stale data, and you risk not just missing your desired trade, but getting picked off at a disadvantage if the market moves in the meantime.
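The blocking logic described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual implementation: the class name, the rolling-window size and the microsecond threshold are all assumptions made for the example.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class VenueLatencyGate:
    """Blocks orders to venues whose recent average latency exceeds a limit.

    Hypothetical sketch: names, window size and thresholds are illustrative,
    not drawn from any monitoring appliance's API.
    """
    limit_us: float          # latency ceiling in microseconds
    window: int = 100        # number of recent samples to average over
    samples: dict = field(default_factory=dict)

    def record(self, venue: str, latency_us: float) -> None:
        # Keep a rolling window of measured latencies per venue.
        self.samples.setdefault(venue, deque(maxlen=self.window)).append(latency_us)

    def allow_order(self, venue: str) -> bool:
        recent = self.samples.get(venue)
        if not recent:
            return False     # no measurements yet: fail closed, block the order
        return sum(recent) / len(recent) <= self.limit_us
```

Failing closed when no measurements exist is a deliberate choice here: with no evidence the venue is fast, the gate assumes it is not.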
But these metrics aren’t just useful as inputs for developing and executing trading strategies; they are also critical for operational and administrative tasks. For example, network monitoring determines traffic levels and can assist in capacity planning, which matters for anticipating the volumes of data that would be generated by, say, entering a new asset class; for assessing the impact of new rules that require traders to show all the liquidity behind a limit order on over-the-counter electronic venues such as OTC Markets (see story, page 1); and for handling everyday peak volumes, keeping the business running smoothly through sudden bursts of activity that could otherwise clog a firm’s networks and paralyze its trading.
In addition, these metrics are useful for holding service providers to the terms of their contracts and for monitoring compliance with service-level agreements, something that Chicago-based MKAdvantage is using VSS Monitoring’s technology to provide for clients sensitive to latency, dropped messages and other key performance indicators (see story, page 1). After all, how do you know whether your vendors are performing? These services can tell you how much data is being delivered, at what speed, and how often those contracted levels are achieved or missed. It may be little comfort if a vendor falls short of the required standards, but firms can use that data to force improvements in service, to renegotiate contract terms if they feel they are getting poor value, or, in extreme cases, to justify breaking a contract.
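The SLA question boils down to a simple calculation over measurement intervals: in what fraction of intervals did the feed meet both its latency ceiling and its throughput floor? A minimal sketch, assuming hypothetical contract thresholds and a list of (latency, message-rate) samples:

```python
def sla_compliance(intervals, max_latency_ms, min_msgs_per_sec):
    """Fraction of measurement intervals in which a feed met its SLA.

    `intervals` is a list of (avg_latency_ms, msgs_per_sec) tuples, one per
    measurement window. Thresholds are illustrative contract terms, not
    values from any real agreement.
    """
    if not intervals:
        return 0.0
    met = sum(1 for lat, rate in intervals
              if lat <= max_latency_ms and rate >= min_msgs_per_sec)
    return met / len(intervals)
```

A compliance figure below the contracted percentage (99.5%, say) is exactly the kind of evidence a firm could bring to a renegotiation.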
Meanwhile, from the vendor’s point of view, the more granular the data collected, the more detail they can provide to end-users to justify the performance of individual services. Equally, everyone wants a monitoring system that isn’t just functionally rich and capable of precision monitoring, but also easy to use and understand, which is why London-based TS-Associates has hired a new head of product design to update the user interface for its TipOff latency monitoring appliance (see story, page 8).
Firms also want to be able to monitor as much information from as many sources as possible using the same systems, correlating business issues such as trading activity with IT issues, as is possible using ITRS’ new FIX plug-in for its Geneos systems monitoring platform (see story, page 7). For example, imagine the value of combining a tool like Thomson Reuters’ Equity Market Share Reporter (see story, page 8) with market-by-market latency and traffic metrics: you could determine how liquidity in certain stocks shifts during peak times or when a specific venue suffers a latency spike, and predict where you can get the best price, fastest, to fulfill your strategy if a stock’s home listing venue experiences delays.
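That venue-selection idea can be illustrated with a toy routing function that combines the two data sets: quoted prices per venue and measured latency per venue, preferring the best price among venues within a latency budget. All venue names, prices and latency figures below are invented for illustration.

```python
def best_venue(bids, latencies_us, max_latency_us):
    """Pick the venue showing the best bid among those within a latency budget.

    `bids` maps venue -> best bid price; `latencies_us` maps venue -> measured
    round-trip latency in microseconds. Venues with no latency measurement are
    treated as infinitely slow and excluded. Purely illustrative logic.
    """
    eligible = {v: p for v, p in bids.items()
                if latencies_us.get(v, float("inf")) <= max_latency_us}
    if not eligible:
        return None
    return max(eligible, key=eligible.get)
```

Here the latency metric acts as a hard filter and price as the tiebreaker; a real smart order router would weigh fill probability, fees and queue position as well.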
And as it becomes harder to achieve competitive advantage from latency, and firms battle over ever-decreasing increments, finding new ways to analyze this data and couple it with other content looks set to become increasingly important in future.