Although bleeding-edge development efforts have focused on lightning-fast algorithms that cut humans out of the trading process, Max believes natural-language processing, along with the often-overlooked fundamentals of search-engine technology, will play an increasingly important role as financial markets data enters the age of Big Data.
The market data industry is no stranger to high volumes of data: rising message rates, driven by a combination of higher-frequency trading, market volatility, decimalization and other market forces, threaten to overwhelm the data and trading infrastructures of trading venues and market participants alike. While many efforts have focused on bolstering the capacity of those infrastructures, an equal amount of effort has been spent on analytical tools that process and filter these increased volumes of data to derive useful insight. In other words, the more data you have, the more tools you need to make it useful. Or, mo’ data, mo’ problems.
To get the most out of structured data, firms are employing new database technologies to process, store, correlate and retrieve Big Data faster than traditional databases. Meanwhile, non-price data volumes are also increasing as firms try to derive value from semi-structured and unstructured data, and look for ways to leverage data hidden within their enterprise, and harness public information available on the internet.
For these functions, a new breed of search tools is emerging to provide broader information search and discovery. For example, Epam’s InfoNgen business developed an appliance for internal content discovery and aggregation in 2009 to capture data held on internal servers, in email messages, broker research and other documents, or on the internet, which could then be combined with other data. Only a few years ago, complex-event processing engines emerged as a lightweight way to monitor multiple data streams to identify optimal trading circumstances. But now, the influx of new data sources that firms believe will deliver a trading edge is prompting interest in more heavyweight data-processing solutions, such as IBM’s Watson supercomputer, which can be used to process vast volumes of data for investment analysis, or to gain a better view of the risk associated with clients, for example. But Watson also features natural-language processing capabilities that enable it to understand unstructured data, and the platform is self-learning, so the more data you give it, the better results it generates. Or, mo’ data, mo’ money.
Natural-language processing serves dual purposes: On one hand, it allows computers to understand unstructured data, such as text content like news, research, blogs or social media posts, rather than merely numerical values, thus providing context and nuance to figures that may not tell the whole story. On the other hand, it can also allow traders to query today’s vast amounts of data using plain English, rather than having to remember combinations of codes or shortcuts. For example, Thomson Reuters designed its Eikon terminal with this in mind: that tomorrow’s traders would be more familiar with web searches than with the clunky function keys of olde. Its latest iteration, version 3.0, incorporates natural-language search for querying the terminal’s data and analytics. So if you type “Apple Samsung 2001 to 2012 market cap and macd” into Eikon, the system will display market capitalization and moving average convergence/divergence charts for both stocks on a single graph.
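As an illustration of the kind of parsing such a search box performs, here is a toy sketch, emphatically not Eikon's actual implementation, that pulls tickers, a year range and indicator names out of a plain-English query. The symbol table and indicator list are invented placeholders for what would really be a full security master and analytics catalog:

```python
import re

# Invented lookup tables; a production system would resolve names
# against a security-master database and an analytics catalog.
SYMBOLS = {"apple": "AAPL", "samsung": "005930.KS"}
INDICATORS = {"macd", "market cap", "rsi"}

def parse_query(text):
    """Turn a plain-English request into a structured query:
    which tickers, which year range, which analytics to chart."""
    words = text.lower()
    tickers = [sym for name, sym in SYMBOLS.items() if name in words]
    years = re.findall(r"\b(?:19|20)\d{2}\b", words)
    analytics = sorted(ind for ind in INDICATORS if ind in words)
    return {
        "tickers": tickers,
        "range": (min(years), max(years)) if years else None,
        "analytics": analytics,
    }
```

Even this crude keyword-and-regex approach turns the example query into something a charting engine could act on; real natural-language search layers add disambiguation, synonyms and ranking on top.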
Thomson Reuters isn’t alone. Wolfram Research released its Wolfram Alpha online “computational knowledge engine” in 2009, while Austin, Texas-based financial search startup 9W Search enables financial advisers, analysts and researchers to search and compare company financial data from public filings and other data sourced from Edgar Online. Another newcomer to financial search is Quandl, which entered 2013 having indexed 2 million freely available financial and economic time-series datasets, statistics and indicators from exchanges, clearinghouses, trade organizations, media outlets, central banks, regulators and government bodies, and recently doubled its data coverage. Quandl provides for free what other vendors charge to deliver, making it both appealing and disruptive.
Now imagine if these services were all made available within the app store of a web-based terminal and tied together with a unified search mechanism that links and cross-references their data. Because that’s where I believe the financial data desktops of the future are headed, with search as a central component.
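At its simplest, a unified search layer of the sort imagined here could rank datasets from several providers against a single free-text query. The sketch below is purely hypothetical; the provider names, catalog shape and term-overlap scoring are invented stand-ins for what a real federated search service would do with proper indexing and relevance ranking:

```python
def unified_search(query, catalogs):
    """Rank datasets from several providers by how many query terms
    appear in their descriptions. catalogs maps a provider name to a
    dict of dataset name -> free-text description."""
    terms = set(query.lower().split())
    hits = []
    for provider, datasets in catalogs.items():
        for name, description in datasets.items():
            score = len(terms & set(description.lower().split()))
            if score:
                hits.append((score, provider, name))
    # Highest-scoring datasets first, regardless of which provider holds them.
    return [(provider, name) for score, provider, name in
            sorted(hits, key=lambda h: -h[0])]
```

The point of the sketch is the shape of the problem, not the scoring: one query fans out across many providers’ catalogs, and results come back in a single cross-referenced ranking, which is exactly what an app-store desktop with search at its center would need.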