Data Standards Competition Heats Up

Michael Shashoua

Last week saw a great deal of activity in the data standards modeling field, or at least a lot of publicity from competing models. Open Data Model, a newer organization than the well-established EDM Council, has set out classifications based on asset-class distinctions as the basis of its model, which draws on the ISO 10962 standard and the same organizing principles as Wikipedia, according to Rodger Nixon, the organization's chief executive and founder.
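To make the asset-class idea concrete, here is a minimal sketch in Python of how a classifier keyed to ISO 10962 category codes might work. The subset of categories and the function name are illustrative assumptions for this column, not Open Data Model's actual implementation.

```python
# Illustrative sketch only: a lookup of top-level ISO 10962 (CFI) categories,
# taken from the first character of the six-character CFI code. The subset
# shown here is an assumption for illustration, not a vendor's schema.

CFI_CATEGORIES = {
    "E": "Equities",
    "D": "Debt instruments",
    "R": "Entitlements (rights)",
    "O": "Listed options",
    "F": "Futures",
    "M": "Others (miscellaneous)",
}

def classify_by_asset_class(cfi_code: str) -> str:
    """Return the top-level asset class implied by a CFI code's first character."""
    if not cfi_code:
        raise ValueError("empty CFI code")
    return CFI_CATEGORIES.get(cfi_code[0].upper(), "Unknown category")

if __name__ == "__main__":
    print(classify_by_asset_class("ESVUFR"))  # first letter "E" -> Equities
    print(classify_by_asset_class("FXXXXX"))  # first letter "F" -> Futures
```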

Open Data Model's membership grew steadily in the latter part of 2011, according to Nixon, who contends that his organization's offering and methods are more effective than those of the EDM Council. "There are standardizations of the messages and transactions, but our field isn't transactions, it's reference data," he says. "It's not a relational model."

Multiple classifications provide a basis for better dimensional models for data analytics, states Nixon. "You can see the instruments underlying the derivatives," he says. "You're trying to see the underlying instruments in derivatives and 'explode' them out [for analysis]. What's your exposure? What are the simplest components, the ones that have an effect?"
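A rough sketch of the "explode" idea Nixon describes: walk a derivative's composition down to its simplest underlying instruments and sum the exposure each contributes. The Instrument structure and the sample swap below are hypothetical, written purely to illustrate the decomposition.

```python
# Minimal sketch, assuming a hypothetical tree of instruments where a
# derivative lists its underlyings and a simple instrument has none.

from dataclasses import dataclass, field

@dataclass
class Instrument:
    name: str
    asset_class: str
    notional: float = 0.0
    underlyings: list["Instrument"] = field(default_factory=list)

def explode(instrument: Instrument, weight: float = 1.0) -> dict[str, float]:
    """Return exposure per simple (leaf) instrument, accumulated down the tree."""
    if not instrument.underlyings:          # a simple component: record its exposure
        return {instrument.name: weight * instrument.notional}
    exposures: dict[str, float] = {}
    for leg in instrument.underlyings:      # a derivative: recurse into its legs
        for name, amount in explode(leg, weight).items():
            exposures[name] = exposures.get(name, 0.0) + amount
    return exposures

# Example: an equity index swap referencing two underlying stocks.
swap = Instrument("Index swap", "Swap", underlyings=[
    Instrument("Stock A", "Equity", notional=600_000.0),
    Instrument("Stock B", "Equity", notional=400_000.0),
])
print(explode(swap))  # {'Stock A': 600000.0, 'Stock B': 400000.0}
```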

Meanwhile, in a webcast by the EDM Council, managing director Mike Atkin emphasized the uses of its Financial Industry Business Ontology (FIBO), which, as he describes it, is not strictly a data model. "It's a formal and factual definition of reality, done by subject matter experts and validated by the industry," says Atkin. FIBO will have three versions: one for business entities, one for instruments and one for loans. Beyond ontology, however, Atkin says semantic processing will be the future of data management, because it can bridge the gap between simple data dictionaries and ontologies.

The EDM Council worked with the standards body Object Management Group on FIBO, and has completed a proof of concept for semantic processing using derivatives and FIBO, says Atkin. The aim is to help firms meet regulatory demands by taking in FpML, pulling data from relational databases, analyzing and classifying that data, and analyzing links to counterparties, he explains. The council's semantic processing plans show promise for pulling together different data standards; specifics include implementation in XML and, possibly, metadata annotations for derivatives, Atkin adds.
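As a rough illustration of the first step Atkin describes, the sketch below takes in an FpML-style trade message and pulls out the counterparty links, the kind of raw material that would then be classified against an ontology such as FIBO. The XML fragment is a simplified, FpML-like example written for this column; real FpML uses namespaced schemas and a far richer structure.

```python
# Sketch only: parse a simplified FpML-like document and resolve the party
# references used in the trade. Element names are illustrative assumptions.

import xml.etree.ElementTree as ET

SAMPLE_TRADE = """
<dataDocument>
  <trade>
    <swap>
      <productType>InterestRateSwap</productType>
      <payerPartyReference href="party1"/>
      <receiverPartyReference href="party2"/>
    </swap>
  </trade>
  <party id="party1"><partyName>Alpha Bank</partyName></party>
  <party id="party2"><partyName>Beta Asset Management</partyName></party>
</dataDocument>
"""

def extract_counterparty_links(xml_text: str) -> dict[str, str]:
    """Map each party reference in the trade to the named counterparty."""
    root = ET.fromstring(xml_text)
    parties = {p.get("id"): p.findtext("partyName", default="")
               for p in root.findall("party")}
    links = {}
    for element in root.iter():
        if element.tag.endswith("PartyReference"):
            links[element.tag] = parties.get(element.get("href"), "unknown")
    return links

print(extract_counterparty_links(SAMPLE_TRADE))
# {'payerPartyReference': 'Alpha Bank', 'receiverPartyReference': 'Beta Asset Management'}
```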

The positioning by Open Data Model and the EDM Council raises the question of which approach is the right way forward for data standards modeling: emphasizing organization and classification by asset class, or linking together the proverbial alphabet soup of acronym-named formats? I'd like to hear your thoughts. Please post replies here, or respond via Inside Reference Data's LinkedIn discussion group, where this column will also appear.
