For this report, Inside Reference Data asked participants in the Virtual Roundtable feature how data sourcing issues can help or hinder data quality. Their responses all seemed to drill down to the issue of ownership of the data.
BNY Mellon's Amy Harkins says the biggest challenge when sourcing data in a large firm is putting its assessment in the hands of a single business owner. Eagle Investment Systems' Marc Rubenfeld identifies control of the sourcing as the greater issue. Control of the data, he says, makes it possible to enrich, repurpose or otherwise apply it to reporting, benchmarking or analysis. Firms should "have complete control of it and truly own it," he says.
Another issue that can affect data quality, the automation of data processing, does not necessarily mean quality will decrease. Rubenfeld sees automation as "critically important to data quality, as it helps to discover errors that wouldn't otherwise be discernible."
HSBC's Chris Johnson identifies another benefit of automation: faster data validation checks. SIX Financial Information's Dominique Tanner says that when there are deficiencies in automated data processing, they can be corrected so they do not reappear. He brings the question back to sourcing, noting that firms must understand how sources deliver data in order to map it to the correct fields. That, he says, is the precursor to automation.
A data-centric processing and delivery model or a centralized data team could be the best ways to raise the quality of the data being produced. That appears to be the frame under which ownership and automation should be sorted out.