Virtues of Consistency

Michael Shashoua, Waters

Philosophers and authors may dismiss consistency as an overrated refuge for unimaginative minds, but when it comes to industrial-strength data quality efforts, it proves to be a necessity.

For reference data operations staff, identifiers, data quality, data governance and consistency are all intertwined, something that is very evident in the stories featured in the April issue of Inside Reference Data.

Even before data managers can consider trying an innovative technique such as crowdsourced data processing, as Nicholas Hamilton reports in "Crowd Control", they must address data quality and security concerns. WorkFusion CEO Max Yankelevich, an outspoken proponent of using the practice in the financial industry, takes pains to reassure us about security. He argues that breaking data down into such small pieces makes it unlikely, if not impossible, for any leaked data to be whole enough to comprehend.

A March webcast on the topic of data governance, organized by Inside Reference Data and sponsored by Infogix and SmartStream, found that data quality, rather than cost savings, is seen as the foremost goal for data governance efforts. Even though, as Infogix's Bobby Koritala says in our report on this webcast, firms ought to be able to access data from multiple sources in system-independent fashion for data governance programs, controls are still necessary if consistency is to be maintained.

But why does data quality remain a challenge? Avox CEO Mark Davies asks this question in Industry Warehouse. Changes to entity details or legal structures can create inconsistencies in data, for one thing, he says. In response, Davies points to data governance policies as a means to get a handle on data quality. Centralizing processes, and avoiding siloed storage and handling of counterparty data, is necessary for accuracy. Ongoing validation of data and a collaborative approach, working with peers on data cleansing and maintenance, can reduce risk exposure, he argues.

Centralization (for consistency) is paramount in reference data operations at global data and messaging services provider Swift, as its head of reference data, Patrik Neutjens, describes. Swift is aiming to make that centralization a reality through its SwiftRef platform, as well as a know-your-customer or correspondent data repository. Neutjens echoes Davies' advice to pursue collaboration, noting how partnerships are an important asset for SwiftRef.

Several news stories in this issue show robust activity on identifier issues. The US Commodity Futures Trading Commission is proposing its own global transaction identifier; consolidated audit trails are on the agenda with a push from the Securities and Exchange Commission; and providers such as SIX Financial Information and Markit are ramping up Fatca compliance and swaps identification capabilities, respectively.

It all comes down to due diligence. Data consumers from investment firms have to be careful and deliberate about sourcing, managing and distributing data to support decision-making. Trying to speed through or cut corners on the complex mix of data management aspects just won't do.
