
This article was paid for by a contributing third party.

Solving the Data Challenge: Technical Solutions for Optimization of Risk Management, Capital and Liquidity Resources

Emmanuel Danzin, Opensee

For capital markets traders and risk managers, optimizing these resources means having an acute understanding of risk and of how resources are used, so that business opportunities are optimized under a strong set of constraints while complying with risk limits and overall strategy. For treasury functions, it requires a grasp of future cash flow projections under multiple scenarios and from multiple angles, and the management of liquidity risk with minimal error margin and cost. In both cases, desks need to handle significant datasets by running non-linear aggregations and, equally, by drilling down to the most granular level, all at speed and with autonomy for business users.

Over the past few years, technologies have been employed either to leverage the fastest hardware, such as in-memory (RAM) uploads or graphics processing unit-based full revaluations, or to take smart shortcuts, such as pre-aggregations or machine learning that analyzes historical data usage and pre-indexes datasets accordingly. Significant challenges remain, however, in quickly and consistently providing the full wealth of datasets to business users without excessive infrastructure costs.

The Problem With Non-Linearity

Essentially, banks are having to grapple with massive amounts of data because the aggregations that need to be calculated are non-linear. Marginal impacts of changes—for example, new trades or changes in projected cash flows—require complex reaggregation with the rest of a portfolio as different netting methodologies are applied.
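As a minimal illustration, with made-up numbers and not Opensee's implementation, the sketch below shows why netted aggregation is non-linear: the marginal impact of a new trade differs from its standalone value, so the affected netting set has to be re-aggregated in full.

# Illustrative sketch of why netting makes aggregation non-linear. Exposure
# per netting set is max(sum of trade mark-to-markets, 0), so the marginal
# impact of one new trade cannot be computed in isolation; the whole netting
# set must be re-aggregated.

trades = {
    "netting_set_A": [12.0, -7.5, 3.2],   # hypothetical trade MtMs
    "netting_set_B": [-4.0, 1.5],
}

def total_exposure(portfolio):
    """Non-linear aggregation: the positive part is taken per netting set."""
    return sum(max(sum(mtms), 0.0) for mtms in portfolio.values())

base = total_exposure(trades)

# Marginal impact of a new trade with an MtM of -6.0 in netting set A.
new_trade_mtm = -6.0
what_if = {k: list(v) for k, v in trades.items()}
what_if["netting_set_A"].append(new_trade_mtm)

marginal = total_exposure(what_if) - base
standalone = max(new_trade_mtm, 0.0)

print(f"portfolio exposure before: {base:.1f}")
print(f"marginal impact of new trade: {marginal:.1f}")     # -6.0 once netted
print(f"standalone value of new trade: {standalone:.1f}")  # 0.0, not additive

Here the new trade adds no exposure on a standalone basis, yet reduces the portfolio exposure by 6.0 once netted, which is exactly the effect a linear sum would miss.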

In the capital markets space, risks and resources used are monitored through mark-to-future projections, risk-weighted assets, sensitivities and cross-valuation adjustments accounting for credit exposure, funding impact and capital use, among others, as well as an array of risk metrics. To determine the footprint of transactions and the use made of resources deemed scarce, ‘what-if’ simulations are employed to assess these marginal impacts through non-linear aggregations. They can be performed either before or after a transaction is executed, as follows:

Pre-Trade Calculations
Pre-trade calculations measure the incremental impacts of new trades for traders to assess their relevance to the desk in terms of strategy and risk. Here, speed is key as traders need to be able to quickly calculate these marginal impacts on all resources to decide whether or not they should do the trade and with what economics, while minimizing error margins that are a source of expensive buffers.
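A hedged sketch of that pre-trade decision, using hypothetical resource names, limits and marginal impacts rather than any real desk setup, could check the candidate trade against the remaining headroom on each resource.

# Illustrative pre-trade check (hypothetical figures): compare the marginal
# impact of a candidate trade on each scarce resource against the remaining
# headroom under the desk's limits.

current_usage = {"rwa": 920.0, "leverage_exposure": 4_300.0, "funding": 150.0}
limits        = {"rwa": 1_000.0, "leverage_exposure": 4_500.0, "funding": 180.0}

# Marginal impacts of the candidate trade, as produced by a what-if
# re-aggregation of the full portfolio (see the previous sketch).
marginal_impact = {"rwa": 45.0, "leverage_exposure": 120.0, "funding": 35.0}

breaches = {
    resource: current_usage[resource] + delta - limits[resource]
    for resource, delta in marginal_impact.items()
    if current_usage[resource] + delta > limits[resource]
}

if breaches:
    print("Trade would breach limits by:", breaches)
else:
    print("Trade fits within all resource limits.")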

Post-Trade Impacts
Post-trade impacts also need to be measured and assessed with potential mitigation actions in mind, such as portfolio reshuffling, clearing, compressions, trade restructuring and synthetic offloads. Speed is less an issue in these cases, yet the multiple scenarios that need to be considered translate into massive volumes of data.

Asset-Liability Management (ALM)/Treasury
In the ALM/treasury space, aggregations are likewise non-linear and diverse, as they must comply with a wide array of netting rules, such as accounting standards (International Financial Reporting Standards and US Generally Accepted Accounting Principles, for example), leverage ratio and tax, as well as any additional constraints that apply to global systemically important banks. These calculations, for example of liquidity metrics such as the net stable funding ratio and the liquidity coverage ratio, need to be run on the fly for multiple cash flow projection scenarios, each with granular data so that the full wealth of the dataset is retained. Ideally, treasurers want to manage liquidity with precision, covering various scenarios accurately and at lower hedging cost. With a granular and rich dataset, future liquidity positions can be simulated, then aggregated and investigated with enhanced accuracy. Access to the full array of datasets also opens up a wealth of new possibilities, for example applying machine learning to historical data to better predict future behaviors.
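For illustration only, and heavily simplified relative to the actual Basel III rules, the sketch below rolls granular cash flow records up into a liquidity coverage ratio-style metric per scenario; all figures, weights and scenario names are invented.

# Simplified LCR-style calculation per scenario (illustrative figures only;
# the real rules apply detailed run-off rates, caps and floors).
from collections import defaultdict

hqla = 500.0  # hypothetical stock of high-quality liquid assets

# Granular 30-day cash flow records: (scenario, counterparty type, amount);
# negative amounts are outflows, positive amounts are inflows.
cash_flows = [
    ("base",     "retail",    -200.0),
    ("base",     "wholesale", -300.0),
    ("base",     "inflows",    150.0),
    ("stressed", "retail",    -260.0),
    ("stressed", "wholesale", -420.0),
    ("stressed", "inflows",     90.0),
]

net_outflows = defaultdict(float)
for scenario, _, amount in cash_flows:
    net_outflows[scenario] -= amount  # outflows net of inflows

for scenario, outflow in net_outflows.items():
    lcr = hqla / outflow if outflow > 0 else float("inf")
    print(f"{scenario}: net outflows {outflow:.0f}, LCR {lcr:.0%}")

Because the aggregation starts from row-level records, the same dataset supports both the ratio itself and a drill-down into which cash flows drive a breach under the stressed scenario.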

Speed and Autonomy

User agility is a prerequisite for visualizing, navigating and efficiently reporting on the data. Ultimately, both speed and autonomy allow business users to better understand data and focus on the most salient data points. Big data analytics solutions such as Opensee have been designed to enable the optimization of resources with speed and user autonomy. What is new is that these operations can now be performed without compromising on volumes, meaning the full granularity of the dataset can be maintained. At the root of this breakthrough is the capacity to exploit the horizontal scalability of disk-based storage, which provides virtually unlimited capacity on low-cost infrastructure while maintaining RAM-like speeds.
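As a rough sketch of that aggregate-and-drill-down workflow, using generic pandas rather than Opensee's query interface and with hypothetical column names, the same granular dataset can serve both a top-level aggregation and a drill-down to individual trades.

# Generic aggregate-and-drill-down sketch over a granular dataset
# (pandas used for illustration; columns and figures are hypothetical).
import pandas as pd

rows = pd.DataFrame({
    "desk":     ["rates", "rates", "credit", "credit", "credit"],
    "trade_id": ["T1", "T2", "T3", "T4", "T5"],
    "rwa":      [40.0, 25.0, 60.0, 15.0, 30.0],
    "exposure": [300.0, 180.0, 420.0, 90.0, 200.0],
})

# Top-level aggregation across desks.
summary = rows.groupby("desk")[["rwa", "exposure"]].sum()
print(summary)

# Drill down from an aggregate figure to the individual trades behind it,
# without leaving the same granular dataset.
credit_trades = rows[rows["desk"] == "credit"].sort_values("rwa", ascending=False)
print(credit_trades)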

When it comes to optimizing resources, rather than attempting to optimize different sources of trade data in a piecemeal fashion, which risks increasing the usage of one key resource while attempting to decrease another, multiple datasets can now be centralized into one, or central datasets can be further enriched by adding a secondary set. This centralization of data, for instance around risk metrics, profit-and-loss information, balance sheet consumption and collateral usage, allows business users to aggregate, manipulate, analyze, simulate and visualize trade data with a simultaneous, comprehensive view of the impacts across all dimensions. This means the entire ‘utility function’ can be optimized, rather than just one metric at a time, as the sketch below illustrates. Such enhanced data capacity represents a groundbreaking shift that ultimately allows banks to optimize their resources on a much more efficient basis.
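A minimal sketch of that idea, with hypothetical mitigation actions, impacts and weights, scores each candidate against a combined cost across all resources rather than a single metric.

# Illustrative multi-metric optimization (hypothetical actions, impacts and
# weights): choose the action that improves the combined 'utility function'
# across all resources, not just one metric at a time.

# Estimated impacts of each candidate action (negative = resource freed up).
candidate_actions = {
    "compression":       {"rwa": -30.0, "balance_sheet": -200.0, "collateral":  10.0},
    "clearing_novation": {"rwa": -50.0, "balance_sheet":  -50.0, "collateral":  60.0},
    "synthetic_offload": {"rwa": -20.0, "balance_sheet": -150.0, "collateral": -20.0},
}

# Relative cost of consuming one unit of each resource (hypothetical weights).
weights = {"rwa": 1.0, "balance_sheet": 0.2, "collateral": 0.5}

def combined_cost(impacts):
    """Weighted sum of impacts across all resources (lower is better)."""
    return sum(weights[r] * delta for r, delta in impacts.items())

best = min(candidate_actions, key=lambda a: combined_cost(candidate_actions[a]))
for action, impacts in candidate_actions.items():
    print(f"{action}: combined cost {combined_cost(impacts):+.1f}")
print("Best action on the combined view:", best)

# Note: ranking by a single metric (for example RWA alone) would pick
# 'clearing_novation', while the combined view may prefer a different action.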

Fine-Tuning Resources with Speed and Agility

Confronted with stringent regulatory constraints and a challenging market environment, banks are having to adjust by leveraging new technologies and solutions to allocate their resources optimally, running multidimensional scenarios on their full, granular datasets. The era of running optimization scenarios on a manual and intuitive basis is coming to an end. Instead, financial institutions embracing innovative big data solutions are finally able to fine-tune their resources with speed and agility, to their advantage.
