Risk regulation is a cluster bomb – multiple devices with multiple impacts – but can applying uniform risk data principles save costs in 2013-16?
With six months to go before the fourth Capital Requirements Directive (CRD IV) comes into force, many will be asking what technological improvements will be needed to manage risk efficiently. Before embarking on a costly overhaul of their data systems, firms should look at which regulatory trends are likely to require similar changes in future and adjust their specifications accordingly.
CRD IV, which takes effect on 1 January 2014, will compel firms to hold capital against their counterparty exposures. This brings with it many data issues relating to risk and counterparty management. For instance, firms will have to calculate credit valuation adjustment (CVA) risk for all OTC derivative trades, except those with non-financial counterparties (NFCs). However, under the European Market Infrastructure Regulation (EMIR), an NFC whose derivatives trading exceeds the clearing threshold loses its NFC treatment. Firms' reference data systems will therefore have to track how much each counterparty is trading, or risk holding too much capital or, worse, being non-compliant. And this is just one requirement of many.
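The reference-data check described above can be sketched in a few lines. This is a hypothetical illustration only: the threshold figures and asset-class names below are placeholders, not the regulatory values, and a real system would work from EMIR's per-asset-class gross notional thresholds.

```python
# Illustrative sketch: has a non-financial counterparty (NFC) crossed the
# EMIR-style clearing threshold, changing its CVA capital treatment?
# Threshold values here are placeholders, NOT the regulatory figures.

ILLUSTRATIVE_THRESHOLDS_EUR = {
    "credit": 1_000_000_000,
    "fx": 3_000_000_000,
}

def is_nfc_plus(gross_notional_by_class: dict) -> bool:
    """Return True if any asset class exceeds its clearing threshold,
    meaning the counterparty can no longer be treated as an NFC."""
    return any(
        gross_notional_by_class.get(asset_class, 0.0) > threshold
        for asset_class, threshold in ILLUSTRATIVE_THRESHOLDS_EUR.items()
    )
```

A counterparty trading EUR 4bn of FX derivatives would be flagged by `is_nfc_plus({"fx": 4_000_000_000})`; the point is that the flag depends on trading volumes the reference data system must already be capturing.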
[accordion][pane title="Known Unknowns"]
- Will firms be ready to calculate CVA from 1 January 2014?
- With renewed focus on consumer protection, how and when will regulators penalise poor conduct?
- What will the total cost of systems improvements be for the sell-side by 2016?
[/pane]
When considering what a risk data system should look like, firms should have regard to the Basel Committee on Banking Supervision's (BCBS) Principles for Effective Risk Data Aggregation and Risk Reporting. This document sets out comprehensive but high-level principles for what a 'good' risk data framework looks like, covering governance, policies and procedures, and the more technical architecture and infrastructure. These principles are not the answer to every question, but they do provide common ground on what is expected of a data system in the risk management arena.
Conduct risk, for instance, is currently a hot topic and, though there is no consolidated guidance, a great deal has been issued in this area. In the UK, the FSA has made its position clear and the FCA is keen to enforce. Though conduct risk has not traditionally been treated as a true risk type (like credit risk, for example), many of the same principles apply when it comes to the back office. Ultimately, the aim of the game is to ensure that management receive accurate and timely information, enabling them to make effective decisions about the dangers facing the bank and its customers.
All of these requirements have a subtext for those who manage the collection, transmission and aggregation of data within a firm. One example is that, as stated in the FSA’s Retail Product Development and Governance paper, senior management are expected to compare products across the firm for excessive profitability (an indication of exploitative practices). This adds another item to a growing list of risk measures that have to be fed up to senior management through the MI chain.
But where – as is common – products are separated by different business lines, departments and geographies, automating this process becomes a challenge for the firm’s data architecture. This is where it will be expedient for firms to have already implemented the BCBS Principles, which require them to align data across the business through infrastructure such as a consolidated data dictionary and robust end-user controls.
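The cross-business comparison described above can be sketched simply once product data is aligned on a shared identifier (the kind of alignment a consolidated data dictionary enables). The flagging rule, field names and figures below are assumptions for illustration, not anything prescribed by the FSA paper.

```python
# Minimal sketch: margins booked by different business lines are compared
# firm-wide on a common product identifier, and unusually profitable
# products are flagged for the MI pack. The 1.5x-median cutoff is an
# arbitrary illustrative rule, not a regulatory standard.

from statistics import median

def flag_excessive_profitability(records, multiple=1.5):
    """Flag products whose margin exceeds `multiple` times the firm-wide
    median margin, regardless of which business line booked them."""
    margins = [r["margin"] for r in records]
    cutoff = multiple * median(margins)
    return sorted({r["product_id"] for r in records if r["margin"] > cutoff})

books = [
    {"business_line": "retail", "product_id": "LOAN-01", "margin": 0.02},
    {"business_line": "retail", "product_id": "CARD-07", "margin": 0.10},
    {"business_line": "wealth", "product_id": "FUND-03", "margin": 0.03},
    {"business_line": "wealth", "product_id": "BOND-02", "margin": 0.04},
]
# flag_excessive_profitability(books) -> ["CARD-07"]
```

The hard part in practice is not this calculation but getting `product_id` and `margin` to mean the same thing in every business line's feed, which is exactly what the BCBS Principles' data dictionary and end-user controls address.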
At the core of the issue is the daily management of risk within the firm. Funds/liquidity transfer pricing (FTP), for example, remains an area of concern for regulators, whose research showed that some banks' pre-crisis risk tracking suffered from decentralised funding structures, manually adjusted data and a lack of oversight by risk and finance professionals. As a result, banks were unable to track their internal funds transfers, and liquidity was being supplied to business units free of charge.
To address these problems, the BCBS recognises the importance of a bank's liquidity management information system (LMIS). Applying the BCBS Principles to an LMIS yields recommendations such as a single authoritative data source per risk type, procedures to reduce the number of manual adjustments, and internal review by qualified staff. The principles thus map conveniently onto the BCBS' FTP concerns and provide a ready-made set of common denominators that a bank can use to align its liquidity profile limits and funding matrix with other risk measures throughout the firm.
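Two of those recommendations, one authoritative source per risk type and attributable manual adjustments, translate naturally into system constraints. The sketch below is a hypothetical illustration of that idea; the class and field names are invented, not taken from any BCBS text or vendor product.

```python
# Rough sketch of two LMIS controls suggested by the BCBS Principles:
# (1) each risk type has exactly ONE authoritative source of record, and
# (2) manual adjustments never silently overwrite sourced figures - they
# are logged with an approver so qualified staff can review them later.
# All names are illustrative assumptions.

class LiquidityMIS:
    def __init__(self):
        self.authoritative_sources = {}  # risk type -> system of record
        self.adjustments = []            # audit trail of manual overrides

    def register_source(self, risk_type, system):
        # Enforce the single-authoritative-source rule.
        if risk_type in self.authoritative_sources:
            raise ValueError(f"{risk_type} already has an authoritative source")
        self.authoritative_sources[risk_type] = system

    def record_manual_adjustment(self, risk_type, old, new, approver):
        # Overrides are kept to a minimum and are always attributable.
        self.adjustments.append(
            {"risk_type": risk_type, "old": old, "new": new, "approver": approver}
        )
```

Registering a second source for the same risk type (say, a spreadsheet alongside the treasury ledger) fails loudly, which is precisely the decentralised-funding failure mode the regulators' research identified.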
The answer to the question posed in this article's title is therefore a qualified 'yes': compliance can be achieved more efficiently by looking across the multiple regulations that affect a bank's risk data and lining up their requirements. Rather than just getting the governance right and hoping the rest will follow, however, this requires bottom-up engagement with granular data issues.
- There is a great deal of regulation in the risk space: Basel III, CRD IV/CRR, Dodd-Frank, COREP/FINREP, BCBS guidance, the Enhanced Disclosure Task Force
- Conduct risk is a hot topic for regulators and the page count is rising fast
- Risk will have to work together with treasury, accounting, corporate and the back-office to achieve implementation
[pane title="Top Alerts"]
- CRD IV published in the Official Journal of the EU; the Directive and Regulation will come into force from 1 January 2014
- Interest rate risk: Pre-CRD consultation paper says bank IT systems must measure exposure on a single transaction basis
- The price of bad conduct risk data: FCA fines J.P. Morgan £3.1 million for failing to keep up-to-date information on client risk profiles
[/pane][/accordion]