Data professionals are like bartenders: expected to be at their station until the wee hours, ready to throw together something for the most demanding customer at a moment’s notice, using quality ingredients and at a reasonable cost. They are expected to recommend beers and wines, know when to change a keg or bottle, and recall even the most obscure cocktails, delivering perfection every time.
Many take for granted that their data teams can be relied on to deliver remarkable data cocktails—not just to create them, but also to clean up afterwards—but I’ll bet everyone can remember where they were served the best drink of their life, or which bar they’ll never set foot in again.
Any master mixologist knows that cocktails aren’t just about measuring ingredients. Each ingredient has its nuances, just as each brand of spirit has a different taste. Which flavors taste good (or bad) together? Should you use ice, which dilutes the alcohol as it melts but can also help release aromas? Shaken or stirred? And finally, what glass should you serve it in? Even this, just as with wines and beers, can affect the flavor. A good data mixologist knows that every element is crucial to the final product: the base ingredients (content), the mixers (corresponding datasets), the flavor enhancers like juice, bitters or ice (think analytics), and even how you serve the finished drink (via feeds, desktops or spreadsheets, for example).
A master data mixologist must know not just the ingredients and their proportions, but also every acceptable substitution, in case a particular dataset cannot be obtained, or obtaining it would be too expensive. What alternatives will deliver the same result? For those who don’t know this off the top of their head, there are services like Diliger, which allows users to research and compare vendors and their data services. Diliger has recently made that vendor research process easier still, and now also lets users outsource the time-consuming parts of it to its in-house team of analysts.
Of course, you don’t master mixology overnight. It can take years to learn the recipes, let alone the nuances of each ingredient, plus the patience and etiquette to serve all customer types. And in the data world, understanding how to use data in specific market circumstances is a talent that comes only with years of experience. However, as platforms like StockViews begin to invest more in artificial intelligence and machine-learning technologies—in StockViews’ case, to improve the quality of its research—these technologies are taking some of that burden away from data and trading professionals.
But once you have the experience, you’ll not only know your drinks; you’ll also know which drinks will suit the tastes of specific customers. For example, Thomson Reuters has released a version of its Eikon desktop tailored to the needs of buy-side clients, by including previously separate datasets, such as estimates and data from StarMine and Lipper.
And of course, a good mixologist can’t afford to dither, ponder or savor the ingredients: make that drink in 30 seconds, max, or lose the customer! So we garnish this week’s issue with a wedge of Lime (Lime Brokerage, that is), which understands the value of speed, and is in the middle of a series of investments in technology to manage and reduce latency across its data and trading systems, led by chief technology officer Suresh Thesayi. Lime’s aim: to ensure that all of its clients, from those content with normalized feeds to those requiring ultra-low latency, can execute winning strategies.
And speaking of winning, nominations for this year’s Inside Market Data and Inside Reference Data Awards are now open. This year, we have some new categories, we’ve dropped some old ones entirely, and others have changed from being part of the online poll to being call-for-entry awards, where individuals or companies can submit themselves. So spare a thought for our hardworking data bartenders: please nominate and vote for those who have delivered exceptional service over the past year. And don’t forget to tip well.