Machine Learning: Hype vs. Reality

Waters takes an in-depth look at advancements made in this field of artificial intelligence and where there's still room for advancement.


The evolution of artificial intelligence can be reasonably tracked by machines’ conquests of human games. In the 1950s, tic-tac-toe was vanquished by Alexander Douglas’s OXO program. In the 1970s, backgammon world champion Luigi Villa was dispatched by BKG 9.8, a program developed at Carnegie Mellon University. In the 1990s, IBM’s Deep Blue bested chess grandmaster Garry Kasparov, and Michael Buro’s Logistello program destroyed Othello world champion Takeshi Murakami.

Also in the 1990s, Chinook, a program developed by researchers at the University of Alberta, was crowned checkers (draughts) world champion, though that title carries an asterisk: Marion Tinsley, arguably the greatest checkers player ever, had to withdraw from their match to undergo treatment for pancreatic cancer. And in 2011, IBM’s Watson thrashed Jeopardy! champions Ken Jennings and Brad Rutter.

But there was one game that had perplexed researchers and stood as the last domain where humans ruled over computers: the 2,500-year-old Chinese board game, Go. Researchers at Google DeepMind in London had been trying to tackle the intricacies of Go for the better part of a decade, and the team thought it was at least another decade away from evolving its AlphaGo program to beat the best humans in the world. It turns out, however, that they were much closer than they had originally thought.

In late January 2016, the AlphaGo research team published a paper in the scientific journal Nature describing how, three months earlier in October 2015, their AI had defeated European Go champion Fan Hui 5–0 in the formal game of Go, which has longer time controls, and 3–2 in the informal game, which has shorter time controls. To put Go’s complexity into perspective, Demis Hassabis, who co-founded Google DeepMind, the group behind AlphaGo, told Wired magazine that chess offers an average of 35 possible moves at any given turn, compared with roughly 250 in Go. “As Hassabis points out, there are more possible positions on a Go board than atoms in the universe,” according to the article.

Revelation

“I think the biggest thing is we’ve had so many different techniques—everything from neural networks to random forests—that the biggest benefit recently has been in the combination of models, or an ensemble of models.” Blair Hull, Hull Investments

AlphaGo is a revelation because it uses a combination of AI and machine-learning techniques to attack a varied problem. According to the Nature article, the program uses deep-learning techniques that combine “supervised learning from human expert games, and reinforcement learning from games of self-play.” Its neural networks evaluate board positions, while a Monte Carlo tree search simulates thousands of random games of self-play to decide on the best move. This made AlphaGo far more efficient than earlier AIs that leaned on brute-force computation, such as Watson.
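The Monte Carlo rollout idea is easier to see in a toy setting. The sketch below applies pure random playouts to a simple Nim-style game (take one to three stones; whoever takes the last stone wins). It illustrates only the rollout principle, and is far simpler than AlphaGo’s actual tree search and neural networks:

```python
import random

def legal_moves(stones):
    # In this toy game, a player may take 1, 2 or 3 stones.
    return [m for m in (1, 2, 3) if m <= stones]

def random_playout(stones, my_turn):
    # Play uniformly random moves to the end of the game.
    while stones > 0:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return my_turn  # whoever just moved took the last stone and wins
        my_turn = not my_turn

def best_move(stones, playouts=500):
    # Score each legal move by the fraction of random playouts won after it,
    # then pick the move with the best average result.
    scores = {}
    for move in legal_moves(stones):
        rest = stones - move
        if rest == 0:
            scores[move] = 1.0  # taking the last stone wins outright
            continue
        wins = sum(random_playout(rest, my_turn=False) for _ in range(playouts))
        scores[move] = wins / playouts
    return max(scores, key=scores.get)
```

AlphaGo’s advance was to guide such rollouts with trained neural networks rather than uniform randomness, pruning the vast majority of the search space.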

The Google researchers thought they were a decade away from besting Go … and then it was conquered. It’s these advancements in and around machine learning that have the capital markets buzzing. The savviest firms have already begun to tap into the potential of big data. That’s been buttressed by increased capacity and improved data collection, storage and dissemination techniques. And predictive analytics have improved beyond multiple regressions to include strategies such as dimension reduction, which was seen with AlphaGo.

The possibilities for incorporating machine learning into the capital markets are numerous. But even as many vendors and industry experts like to throw around the term, the hype has somewhat surpassed the reality of what’s currently available. But just as how AlphaGo matured faster than anyone had expected, no one in the capital markets wants to get left behind as the quant space has become more competitive and investment managers are struggling to find new sources of revenue.

A Fool’s Errand?

When it comes to machine learning, one interesting strategy that has found a niche in the financial services industry is that of market timing. When talking about market-timing strategies, there are two distinct camps: those who—like Nobel laureate Robert Merton—believe it’s a “fool’s errand” and those who believe it’s the last true way to find outperformance. 

Blair Hull, founder of high-frequency trading (HFT) firm Hull Investments, is solidly in the latter camp. He scoffs when people say that no one can time the market, believing that the explosion of data, combined with new machine-learning techniques, makes market-timing strategies more viable than ever.

“I think the biggest thing is we’ve had so many different techniques—everything from neural networks to random forests—that the biggest benefit recently has been in the combination of models, or an ensemble of models,” Hull tells Waters. “So you don’t just have one model—you have multiple models that you use. That’s the biggest advancement that’s come in recent years.”
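Hull’s point about ensembles can be illustrated with a toy sketch. The three “models” below, and the return series they score, are entirely made up; the point is simply that multiple imperfect forecasters can be combined into one signal:

```python
import statistics

# Three toy "models" scoring the same return series. Each stands in for a
# real forecaster (neural network, random forest, etc.). Illustrative only.
def momentum_signal(returns):
    return 1.0 if sum(returns[-3:]) > 0 else -1.0

def mean_reversion_signal(returns):
    return -1.0 if returns[-1] > statistics.mean(returns) else 1.0

def trend_signal(returns):
    return 1.0 if returns[-1] > returns[0] else -1.0

def ensemble_signal(returns):
    # Equal-weight average of the individual signals. A real desk would
    # likely weight each model by its historical accuracy instead.
    votes = [momentum_signal(returns),
             mean_reversion_signal(returns),
             trend_signal(returns)]
    return sum(votes) / len(votes)
```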

Because Hull Investments is in the HFT space, it has had to invest heavily in its infrastructure. When Waters spoke with Hull in late November last year, the firm had just undergone a major server upgrade because it uses multiple models and needed more processing power.

“This isn’t an easy thing to do,” Hull says, adding that the market-timing strategy is anywhere from 150 percent long to 50 percent short. “It’s gotten roughly double the returns with half the risk. Just as it was considered irresponsible to time the market 30 years ago, I believe in the next 30 years, not to time the market will be considered irresponsible,” he says. And that is thanks largely to advancements in the field of machine learning.

Not for the Faint of Heart

There are many techniques that fall under the umbrella of artificial intelligence and, more specifically, machine learning. Broken down, these terms and the techniques underpinning them could each be written about on their own merits. But for the purposes of this feature, we will roll many of them up under the machine-learning banner, since CIOs and CTOs in the capital markets most often refer to machine learning, even if the term has become nebulous outside scientific circles.

When it comes to machine learning, there are supervised learning techniques—such as neural networks, decision trees, k-Nearest Neighbors (k-NN), and linear and logistic regressions—where humans train the algorithms on labeled examples. Then there are unsupervised learning techniques—such as clustering—where the algorithm is handed unlabeled data and left to find structure on its own, often with little visibility into how it arrived at its groupings.
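The distinction is easy to see in miniature. Below is a hedged sketch with made-up data: a k-nearest-neighbors classifier learns from labeled examples (supervised), while a one-dimensional k-means pass groups unlabeled points on its own (unsupervised):

```python
from collections import Counter

# Supervised: k-nearest neighbors, predicting from labeled examples.
def knn_predict(train, query, k=3):
    # train: list of ((x, y), label) pairs; query: an (x, y) point.
    by_dist = sorted(train, key=lambda p: (p[0][0] - query[0]) ** 2
                                        + (p[0][1] - query[1]) ** 2)
    labels = [label for _, label in by_dist[:k]]
    return Counter(labels).most_common(1)[0][0]

# Unsupervised: 1-D k-means. No labels are given; the algorithm
# repeatedly assigns points to the nearest center and moves the centers.
def kmeans_1d(points, centers, iters=20):
    for _ in range(iters):
        groups = {c: [] for c in centers}
        for p in points:
            nearest = min(centers, key=lambda c: abs(c - p))
            groups[nearest].append(p)
        centers = [sum(g) / len(g) if g else c for c, g in groups.items()]
    return sorted(centers)
```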

Joséphine de Chazournes, a senior analyst at consultancy Celent, says that computing has improved vastly in recent years and has made machine-learning systems more powerful in the process. She says machine learning works well for algorithmic trading—big statistical market-data models that improve themselves through learning—and as such suits HFT strategies and market-making.

There is some level of faith that has to be put in these systems and strategies. For some strategies, such as cash-flow or statistical-arbitrage plays that operate on short time scales, machines are better suited to the task and humans should not intervene.

Hull says this isn’t for the faint of heart, and there is significant investment, both human and technological, that has to be made.

“This isn’t an easy thing to do,” he says. “It takes a fair amount of effort to give you a little bit of an edge.”

Trust

Speaking at a Waters-hosted event in November, Lauren Crossett, director of business development at Rebellion Research, considered one of the leading AI quant funds in the US, said that even though the AI makes errors, the firm has to trust it and not override it.

“We might look at all the other funds that sold out when something starts to go down and say, ‘Ok, well we know where the fork was in the road. Let’s see how it comes out,’” Crossett said. “We pretty much depend on what the AI said. It’s not to say that if we think there is something wrong with the data we’re not going to rerun the system. But we’re not going to say, ‘Hey, I have a good feeling about this, so let’s overweight something.’ We’re not going to do that.”

Erez Katz, CEO of Lucena Research, which provides predictive analytics using machine-learning technology, adds that the benefit of these platforms is that they take human emotion out of play, which can be beneficial when it comes to execution as opposed to research.

“Machine learning takes the human’s emotion out of the equation,” he says. “That’s very important because human psychology plays an important role in most investment decisions and it often does a disservice even to the most seasoned portfolio managers. Active investors, whether they admit it or not, often buy a stock for the wrong reasons and exit under duress due to their lack of confidence in their initial decision to enter. Having a scientific backing provides the statistical affirmation and confidence in a trade, but more importantly, it eliminates the emotional angle of the decision process.”

Vendors Enter the Fray

When it comes to research and trading strategies—as opposed to anti-money laundering, clearing and post-trade processes—the machine-learning space has been largely dominated by HFT shops and other more sophisticated alternative asset managers. But 2016 appears to be the year that many vendors are looking to attach their names to “machine-learning platforms,” much the same way vendors went from shying away from cloud-based solutions in the early 2000s to being all-in.

One of the leading vendors in the space has been Portware, which bought predictive analytics firm Aritas Group in 2012 to bolster its machine-learning and decision-support tools. (Portware was acquired by FactSet in September 2015, but still runs independently.)

Portware’s Alpha Vision platform aims to simplify algo selection and serve as a decision-support tool, says Henri Waelbroeck, the company’s director of research, who has worked in AI for 25 years. Most recently, Portware has looked to employ machine learning in trade execution, allowing execution algorithms to interact with order flow more intelligently and efficiently, and to avoid the negative impact of HFT and other parasitic activities in the market, he says.

As an example, if you’re looking to predict how to optimally execute a trade, you need to understand volatility. The problem is that you can’t predict volatility if you’re not aware of what’s going on with the order flow. “So we’ve built an engine that predicts volatility,” he says. “The volatility predictor knows what the volume and order flow predictors are thinking, and vice versa. It’s a cooperative prediction engine, where all agents are predicting one thing, but they’re aware of the opinions and thoughts of the others.”

Alfred Eskandar, CEO of Portware, adds that machine learning and predictive analytics will become the norm as the capital markets grow more electronic. But it’s not an all-or-nothing proposition where firms must either go fully automated or remain fully manual. For some strategies, firms will let the machines do their thing, while others will get more hands-on oversight. The idea is to make the trader as efficient as possible.

“Markets are globally linked, massively fragmented and fully electronic—it is impossible to keep up with those markets and take advantage of opportunities without the ability to have decision-support driven by predictive analysis,” Eskandar says. “At the trading desks of asset managers, these machine-learning tools are bridging a massive gap between the idea generator—the portfolio manager—and the idea executor—the trader. We’re seeing less distance between the two because of these tools and analytics.”

Still Barriers

There are still mathematical and technological barriers when it comes to machine learning, as well as philosophical ones. Hull notes that it’s important to build a system that avoids forward-looking bias, where you “fit” the data to the process and taint the output.

There’s also the so-called “curse of dimensionality,” where you have too many indicators and not enough observations to support a combination of models. And there are the complexities of combining techniques—random forests, neural networks, bagging and boosting, or any number of regression trees—and running them against the same data.
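Bagging (bootstrap aggregation), one of the techniques mentioned above, can be sketched in a few lines. The “model” here is deliberately trivial—the mean of a bootstrap resample—where real bagging would fit decision trees or other learners to each resample:

```python
import random
import statistics

def bootstrap_sample(data, rng):
    # Resample the data with replacement, same size as the original.
    return [rng.choice(data) for _ in data]

def bagged_estimate(data, n_models=200, seed=7):
    # Fit one trivial "model" (the sample mean) per bootstrap resample,
    # then average the fits. Seeded for reproducibility.
    rng = random.Random(seed)
    fits = [statistics.mean(bootstrap_sample(data, rng))
            for _ in range(n_models)]
    # The spread across fits doubles as a rough stability estimate.
    return statistics.mean(fits), statistics.stdev(fits)
```

Averaging many resampled fits smooths out the noise any single fit would pick up, which is the core appeal of ensemble methods.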

There’s a fair amount of computing power involved, and it’s easy to get lost in the data. It can be an expensive endeavor, and while the goal is a platform that learns on its own, getting to that “machine learning” state takes a lot of humans. For proof, consider that the AlphaGo research paper listed 20 authors, to say nothing of how many more assisted at various points in its development.

Snooping is another area that firms need to be cognizant of, says Tom Doris, CEO of OTAS Technologies, which uses machine learning techniques to extract information from large sets of market data. 

“Snooping is where you accidentally allow your model to see data that in real life is from the future,” he says. “It’s surprisingly easy to make this mistake. For example, beware anyone who tells you they have a fantastic quant model that makes great money in their back-test on the S&P 500. If they used today’s S&P 500 and back-tested over the past 10 years, they’ve got a huge selection bias and they’ve slipped their model the secret knowledge of which companies will be the constituents of the S&P in 10 years’ time. If we knew that, we’d all be on a winner, no machine learning required.”
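Doris’ warning can be made concrete with made-up numbers. Backtesting only the names that made it into today’s index quietly excludes the failures that a point-in-time universe would have included:

```python
# Toy illustration of backtest selection bias; all data is invented.
# Price histories for three names; one was delisted along the way.
history = {
    "WinnerCorp": [100, 120, 150, 200],   # grew into the index
    "SteadyInc":  [100, 105, 108, 110],
    "BustedLtd":  [100, 80, 40, 0],       # went bust and dropped out
}

todays_index = {"WinnerCorp", "SteadyInc"}  # BustedLtd is already gone
full_universe = set(history)                # the point-in-time universe

def average_return(names):
    # Simple average of each name's full-period return.
    rets = [(history[n][-1] - history[n][0]) / history[n][0] for n in names]
    return sum(rets) / len(rets)

biased = average_return(todays_index)   # flattered by hindsight
honest = average_return(full_universe)  # includes the failure
```

The biased backtest shows a 55 percent average return; including the delisted name drops it to roughly 3 percent. The model was never given the future explicitly, only a universe chosen with hindsight.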

But firms shouldn’t shy away, either. Machine learning’s greatest promise is in enhancing the trader’s or portfolio manager’s ability to do their job, not cutting them out of the picture.

“I think the future is going to be more like ‘RoboCop’ than ‘Terminator,’” Doris says. “You need a human to figure out the ‘why’ and ‘what might happen next’ because those questions require an understanding of human nature that we are a very long way from being able to encode in a machine. But the machines are great at identifying the situations where there is something unusual happening in the first place.

“It’s not efficient to have an experienced trader spend a day watching prices tick around his screen,” he continues. “The machine should be telling him when something happens that’s unusual and warrants his attention. It’s like aerial combat before radar and missiles: the pilot spends their time desperately scanning the sky, so they’re already fatigued when it’s time to fight, and they can easily miss a threat. With the right market surveillance analytics, you have the radar that tells you where to look.”

Third-Party Builds

The co-founder of a proprietary-trading firm says that, in the future, he sees more players looking to third parties to build these systems, rather than making the investments themselves.

“There’s a lot of work involved and portfolio managers just don’t have time; there’s no way,” he says. “Think about the level of expertise required just to start something like this: You need data scientists and to get the data infrastructure right, package it, understand what it all means, research … just like everything else on Wall Street, it’s going to get outsourced. Just tell us what to do—that’s how most quants look at it.”

And as with any technological development, firms will have to wade through the hype and mischaracterization of what is and what is not machine learning. Patrick Flannery, CEO of capital markets technology provider MayStreet, says that what Google and IBM are creating in the machine-learning and AI space is one thing, but Wall Street isn’t quite there yet. He says machine learning is useful for some simple techniques, but it’s still a lot of brute force and the large-scale simulation of intelligence.

“You have low latency and people still struggle with it. Then you have big data, and it’s a huge, huge, huge practical problem to get the data stored and use it,” Flannery says. “Being able to ask a question that you formulate and then get an answer out of your infrastructure in a reasonable time? That’s beyond 99 percent of all firms right now. Then beyond that is cognition—having something else formulate some of these questions—that’s what machine learning would be: Can a machine come up with a thesis for me to test? That would reduce my latency by a massive amount.”

Flannery is clearly skeptical. But people were once equally skeptical about how quickly AI could advance to best a human at Go. Then it happened.

Salient Points

  • Due to the proliferation of data and data sources, and thanks to rapid advancements in the predictive-analytics space, machine learning is seeing more investment from capital markets firms.
  • While humans should not be cut out of the process, there is some level of faith that has to be put in these strategies or else it defeats the purpose and can lead to lost performance.
  • More vendors are looking to attach their names to the machine-learning craze, but not every provider will have the expertise necessary for such a complex endeavor.
  • This is still largely a game of scale, and the investment necessary will likely continue to scare many potential entrants.

 
