Inside Reference Data speaks to Dilip Krishna, director at Deloitte, about how best to prepare risk data before aggregation, and how stress-test requirements affect aggregation.
Does it make sense to divide up risk data and evaluate or inspect it before aggregating it?
Risk data usually originates elsewhere in the organization, as booked trades, originated and serviced loans, and so on. It is enriched in a number of ways, most pertinently by adding risk metrics to it. To achieve high risk data quality, the raw input itself must have high fidelity. Additionally, the aggregation process must be free from corruption. Both of these are necessary conditions for the ultimate accuracy of risk data.
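The two conditions described above can be made concrete. The sketch below is illustrative only (the record fields and thresholds are assumptions, not any bank's actual schema): raw input records are screened for fidelity before aggregation, and the aggregated output is reconciled against the sum of its inputs to detect corruption in the aggregation step.

```python
# Illustrative sketch of the two necessary conditions for accurate risk data:
# (1) high-fidelity raw input, (2) a corruption-free aggregation step,
# verified here by a reconciliation check. Field names are hypothetical.

def validate_record(rec):
    """Basic fidelity checks on a raw input record (e.g., a booked trade)."""
    required = ("id", "desk", "exposure")
    if any(k not in rec or rec[k] in (None, "") for k in required):
        return False
    if not isinstance(rec["exposure"], (int, float)):
        return False
    return True

def aggregate_by_desk(records):
    """Aggregate exposures by desk, then reconcile against the input total."""
    clean = [r for r in records if validate_record(r)]
    totals = {}
    for r in clean:
        totals[r["desk"]] = totals.get(r["desk"], 0.0) + r["exposure"]
    # Reconciliation: the aggregated total must equal the sum of clean inputs;
    # a mismatch would signal corruption during aggregation.
    input_total = sum(r["exposure"] for r in clean)
    if abs(sum(totals.values()) - input_total) > 1e-9:
        raise ValueError("aggregation does not reconcile with inputs")
    return totals

records = [
    {"id": "T1", "desk": "rates", "exposure": 100.0},
    {"id": "T2", "desk": "rates", "exposure": 50.0},
    {"id": "T3", "desk": "credit", "exposure": None},  # rejected: bad input
]
print(aggregate_by_desk(records))  # {'rates': 150.0}
```

In practice the validation rules would be far richer (reference-data lookups, tolerance checks against source systems), but the shape is the same: gate the inputs, then prove the aggregate ties back to them.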
How should risk data be divided and organized to those ends?
Risk data has several components. The base input is the current actual financial state of the organization, as represented by trading positions and loan balances. Risk metrics also depend on other inputs such as client, facility and collateral information. In addition, developing models for risk management requires a sufficiently long historical record of such data (e.g., five years of loan history). Finally, external data may be needed to supplement internal historical data (e.g., operational loss history data).
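The components enumerated in this answer can be sketched as a single structure. This is a minimal illustration, assuming hypothetical field names rather than any real schema, grouping the base input, reference data, internal history and external supplements:

```python
# Illustrative grouping of the risk data components described above.
# All names are assumptions for the sake of the example.

from dataclasses import dataclass, field

@dataclass
class RiskDataSet:
    positions: list       # current trading positions (base input)
    loan_balances: list   # current loan balances (base input)
    reference: dict       # client, facility and collateral information
    history: list         # internal history, e.g. five years of loan records
    external: list = field(default_factory=list)  # e.g. operational losses

    def history_years(self):
        """Distinct years covered by the internal historical record."""
        return sorted({rec["year"] for rec in self.history})

ds = RiskDataSet(
    positions=[], loan_balances=[], reference={},
    history=[{"year": 2010}, {"year": 2011}, {"year": 2011}],
)
print(ds.history_years())  # [2010, 2011]
```

A check like `history_years` is the kind of coverage test a modeling team might run before trusting the historical record to be "sufficiently long."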
Are the stress-testing requirements of CCAR and BCBS 239 driving more attention to risk data aggregation, and more progress in that regard?
Stress-testing requirements are driving significant changes in risk data aggregation infrastructures. These requirements go well beyond generating risk reports, demanding that banks perform meaningful analysis on both the inputs and the outputs of stress tests. There is also a timeliness requirement that is hard to meet. Existing infrastructures typically fall short of these demands, prompting banks to focus on risk data aggregation systems. Since BCBS 239 is consistent with these requirements but states them more explicitly, the two together are driving more coherence in risk data aggregation infrastructures.