Now that it’s becoming prohibitively expensive to truly be among the fastest players in the low-latency space, firms are finally realizing that trading isn’t a 100-meter sprint, but more of a “tough mudder,” and that speed alone isn’t a sustainable differentiator unless it’s accompanied by strategy.
Low latency isn’t a strategy; it’s an enabler of strategies. It makes you faster, not better. And over recent years, it has become the liquid courage of high-frequency traders, convincing some that they get smoother and smarter with every swig of latency they can drain from the glass, and that they can pick up any cute trade hanging out on the exchange book.
But now, firms are examining the rising costs of achieving low latency, especially in light of the need to spend elsewhere, on items such as improving data governance—an area where 67 percent of investment managers have weak processes in place, according to a new survey from benchmark data provider Rimes Technologies, conducted by consultancy InvestIT.
Hence, firms are more closely scrutinizing the role of latency—and how effectively it delivers returns. And as it becomes harder to improve headline speed, the industry will step up its focus on other latency-related areas, such as measuring and improving variability and predictability—something that CME Group is currently focusing on—said Azul Systems CTO Gil Tene at the recent Inside Market Data Chicago conference. Indeed, a whole industry has built up over the years around monitoring latency and jitter—though according to Peter Lankford, director of the Securities Technology Analysis Center, who presented at last week’s Buy-Side Technology conference in New York, many firms aren’t happy with the results and lack confidence in their levels of time synchronization.
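Tene’s distinction between headline speed and predictability can be made concrete with a small sketch. The sample data and the `latency_profile` helper below are hypothetical—not anything Azul or CME uses—but they illustrate why firms track percentiles and a simple jitter proxy rather than an average:

```python
import math
import random

def latency_profile(samples_us):
    """Summarize latency samples (in microseconds) by percentile.

    A flattering average can hide an ugly tail, which is why latency
    monitoring looks at percentiles and jitter, not just means.
    """
    samples = sorted(samples_us)
    n = len(samples)

    def pct(p):
        # Nearest-rank percentile: smallest value with at least p%
        # of samples at or below it.
        return samples[max(0, math.ceil(p / 100.0 * n) - 1)]

    return {
        "median_us": pct(50),
        "p99_us": pct(99),
        "p999_us": pct(99.9),
        "max_us": samples[-1],
        # One simple proxy for jitter: spread between tail and median.
        "jitter_us": pct(99) - pct(50),
    }

# Simulated feed-handler timings: mostly fast, with occasional slow
# outliers (pauses, queueing) that an average would smooth away.
random.seed(7)
samples = [random.gauss(20, 2) for _ in range(9800)]
samples += [random.gauss(500, 50) for _ in range(200)]

profile = latency_profile(samples)
```

In this simulated run the mean sits near 30 microseconds while the 99th percentile sits an order of magnitude higher; it is that gap—not the headline number—that variability and predictability work targets.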
Think of these efforts as tuning a high-powered engine. Low latency is like having the fastest engine, but a car also needs a fast chassis and aerodynamic design, as well as a team boss to decide race strategy, a pit crew to change tires—ever more important as algorithms’ lifespans shorten and they must be replaced more frequently—not to mention a driver who can execute the race strategy. Because races aren’t just won in a drag race on the straightaway: they’re won through teamwork, racecraft, and—just as in trading—the ability to spot opportunities and enter and exit positions to haul yourself up to the top spot ahead of the competition.
“It doesn’t matter how fast I am if someone makes a better decision around quantifying the markets and trades 30 seconds before me—your strategy is the best way to get faster, not hardware,” said Peter Nabicht, chief technology officer of Allston Trading, at IMD Chicago.
Of course, if you can trade 30 seconds before a rival, you aren’t chasing ticks, but rather reading the trends: something that those with longer time horizons—from intraday to days, weeks or months—are well accustomed to.
Latency-sensitive firms that can’t stomach those horizons will likely move from a narrow focus on latency to looking at latency across every component of the data and trading process. For example, says Rob Walker, chief technology officer at FPGA-based feed handler vendor xCelor, “We work with clients to understand their problems—there’s no sense in using an FPGA card that saves microseconds if you are getting data in a way that introduces milliseconds.”
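Walker’s point is essentially a latency-budget argument: a microsecond saved in one component is irrelevant next to a millisecond lost in another. A toy illustration—all stage names and figures here are hypothetical, not xCelor measurements:

```python
# Hypothetical end-to-end budget for a market data and order path,
# in microseconds. Figures are illustrative only.
budget_us = {
    "exchange_to_colo_network": 50.0,
    "feed_handler_fpga": 1.2,           # where a faster card saves ~1 us
    "normalization": 4.0,
    "strategy_decision": 8.0,
    "wan_hop_to_remote_site": 12000.0,  # a millisecond-scale bottleneck
    "order_gateway": 6.0,
}

total_us = sum(budget_us.values())
# Share of the end-to-end budget consumed by each stage
shares = {stage: t / total_us for stage, t in budget_us.items()}
bottleneck = max(budget_us, key=budget_us.get)
```

In this sketch the WAN hop consumes over 99 percent of the budget, so shaving microseconds off the FPGA stage moves the total by less than 0.01 percent—the kind of whole-path analysis Walker describes.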
In short, low latency isn’t cheap and isn’t easy, but it isn’t going away, either—instead, it is becoming a basic requirement of trading, in the same way as order management systems and connectivity. Those prepared to settle for “good-enough” latency will expect more capabilities at the same speeds, just as xCelor is shortening the switching times of Altera hardware to enable it to perform normalization processes on data that would previously have added extra latency. Going forward, it won’t be raw speed, but how much you can do within those timeframes, that will make the difference.