JWG analysis.
MiFID I was all about creating a common set of rules for the single market. Along the way, it asked regulators to track market abuse. They duly set up a system of transaction reporting that all feeds into Paris – 70% of it via the UK. Seven years and millions in fines for poor reporting practices later, the system sort of works for spotting bad behaviour.
Then, as regular readers of this site are acutely aware, along came EMIR’s trade reporting. This regime has a fundamentally different purpose – to spot the build-up of risk. Its introduction has not been smooth, and it is rumoured that 97% of the reports fail to reconcile. Nor is Europe the only one in trouble: the US has been equally vocal about the challenges, and the FSB has consulted the industry on what can be done globally to fulfil the lofty expectation that regulators, armed with a mountain of data, should be able to prevent the next crisis.
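Why might so many reports fail to reconcile? Under EMIR, both counterparties to a trade report it independently, and the two submissions must first be paired on a shared trade identifier before their economics can even be compared. The sketch below is a minimal, hypothetical illustration of that pairing step – the field names (uti, notional) and the matching logic are our assumptions, not the actual EMIR schema:

```python
# A minimal sketch of dual-sided trade report reconciliation, in the spirit
# of EMIR's pairing requirement. Field names and logic are illustrative.

def reconcile(side_a, side_b, tolerance=0.0):
    """Pair reports by trade identifier, then compare economic fields.

    side_a, side_b: lists of dicts, one per trade report, each side
    submitted independently by one of the two counterparties.
    Returns (matched, mismatched, unpaired) counts.
    """
    by_uti = {r["uti"]: r for r in side_b}
    matched = mismatched = unpaired = 0
    for report in side_a:
        other = by_uti.get(report["uti"])
        if other is None:
            unpaired += 1            # no report from the other side at all
        elif abs(report["notional"] - other["notional"]) <= tolerance:
            matched += 1
        else:
            mismatched += 1          # paired, but the economics disagree
    return matched, mismatched, unpaired


# Two counterparties describe the *same* trade slightly differently:
bank_a = [{"uti": "UTI-001", "notional": 10_000_000.0}]
bank_b = [{"uti": "uti-001", "notional": 10_000_000.0}]  # case differs!
print(reconcile(bank_a, bank_b))  # -> (0, 0, 1): pairing fails on the key
```

When the pairing key itself is inconsistent, every downstream field comparison is moot, which goes some way to explaining how failure rates like the rumoured 97% can arise.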
As banking returns are driven down by balkanisation, banks subsidiarise and we see a shift towards shadow banking, perhaps it’s time to take the MiFID II consultation seriously. Chapter 8 of the ESMA discussion paper (pages 438–496) is all about control of a massively complex system. You will note, as you answer questions 546–577, that the discussion paper asks for a 300% increase in the size of transaction reports, replete with trading flags, counterparty identifiers, algo identifiers and a host of other information about how the trade fits in context. The challenge is that, to understand the context, one must be able to model the market and the many different ways that trading is done … for a variety of different purposes. To get this right, the Transaction Reporting User Pack (TRUP) would need to be thousands of pages thick. Stay tuned for more coverage in this space shortly.
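To make the scale of that change concrete, here is a rough sketch of what a ‘context-rich’ transaction report record might look like. Every field name below is an illustrative assumption on our part; ESMA’s discussion paper describes categories of information, not this schema:

```python
# An illustrative (not ESMA-specified) shape for a 'context-rich'
# MiFID II-style transaction report. All field names are assumptions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TransactionReport:
    # MiFID I-style core economics
    instrument_id: str                      # e.g. an ISIN
    price: float
    quantity: float
    trade_datetime: str                     # ISO 8601 execution timestamp
    venue: str                              # MIC of the execution venue
    # The kind of additional context the discussion paper contemplates
    buyer_lei: Optional[str] = None         # counterparty identifiers
    seller_lei: Optional[str] = None
    client_id: Optional[str] = None         # whose order it ultimately is
    trader_id: Optional[str] = None         # who took the decision
    algo_id: Optional[str] = None           # which algorithm executed it
    waiver_flags: list = field(default_factory=list)   # trading flags
    short_sale_flag: Optional[bool] = None

# Illustrative values only:
report = TransactionReport("GB00B03MLX29", 2250.5, 400.0,
                           "2014-07-15T10:03:12Z", "XLON", algo_id="ALGO-42")
```

Each extra field is only as useful as the standard behind it: an algo identifier that every firm populates differently adds bulk, not context – which is precisely why the TRUP would balloon.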
Perhaps the question we should be asking is: does technology = transparency? Just because we have the capability to shove exabytes of information at the regulators, does that mean we will be any safer?
In 2010, the UK’s Technology Strategy Board funded a paper on achieving supervisory control of systemic risk. It is worth reading, though we doubt many have. One of the insights in the report comes from Malcolm Gladwell,[1] who distinguishes between a ‘puzzle’ (complexity) and a ‘mystery’ (uncertainty or ambiguity), and the need for different approaches to solving them.[2]
Evidence of this concept was provided in July 2010 when, in response to the Financial Crisis Inquiry Commission’s (FCIC) request for information on its mortgage derivatives business, Goldman Sachs provided 2.5 billion pages’ worth of documentation.[3] If printed, over a thousand lorries would have been required to transport them. Naturally, the FCIC accused the bank of wilfully obstructing its work. That the work was obstructed is certainly true, but it also means that anything the FCIC actually reads about Goldman Sachs’ mortgage derivatives business will only be a summary … of a summary … of a summary … of those 2.5 billion pages.
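As a back-of-the-envelope check on that lorry figure (assuming roughly 5 g per printed page, i.e. a 2.5 kg ream of 500 sheets, and a payload of about 10 tonnes per lorry – both our assumptions):

$$
2.5\times10^{9}\ \text{pages}\times 5\ \text{g/page}\approx 12{,}500\ \text{t},\qquad
\frac{12{,}500\ \text{t}}{10\ \text{t/lorry}}\approx 1{,}250\ \text{lorries}
$$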
As the quantity of data soars and the computing systems built to deal with it grow more complex, how it all works becomes more opaque, and it becomes increasingly difficult for anyone to hold an end-to-end view. This creates knowledge gaps between those who create the data infrastructure, those who build the computing systems, those who use those systems and those who make decisions based on all of the above.
As the industry has grown in size and complexity, the volume and types of risk within it have also multiplied. When attempting to maintain financial stability, it is important to recognise that new forms of risk are endogenous: they are generated by the system itself. The more information that becomes available, the greater the ‘processing’ challenge to understand it. Analysing all of the data, all of the time, is impracticable; one therefore has to treat it as a ‘mystery’ (per Gladwell). To extract meaning, the right questions have to be asked in the right way. In more concrete terms, a supervisor collecting data has to know its objectives, understand what information is required to meet them, and ensure that the right standards are in place to be able to correctly analyse the information it collects.
Reliance on any single model to provide ‘an answer’ is dangerous in itself; compound that with poor or incorrect data and the risk of ‘garbage in, gospel out’ escalates. When so much of today’s information is computer generated, it becomes increasingly difficult for those who use it to understand exactly where it comes from and what processes have been applied to ensure its accuracy and quality.[4]
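One practical antidote is a data-quality gate in front of any model, so that malformed records are flagged for investigation rather than silently averaged into a headline risk number. The sketch below is a minimal illustration of the idea; the particular checks, thresholds and field names are our assumptions, not any regulatory standard:

```python
# A minimal sketch of a pre-model data-quality gate: records that fail
# basic validation never reach the 'riskometer'. Checks are illustrative.

def validate(report):
    """Return a list of data-quality findings for one transaction report."""
    findings = []
    if report.get("price") is None or report["price"] <= 0:
        findings.append("non-positive or missing price")
    if report.get("quantity") is None or report["quantity"] <= 0:
        findings.append("non-positive or missing quantity")
    lei = report.get("buyer_lei") or ""
    if len(lei) != 20:                 # an LEI is exactly 20 characters
        findings.append("malformed buyer LEI")
    return findings

reports = [
    {"price": 101.5, "quantity": 1000, "buyer_lei": "5493001KJTIIGC8Y1R12"},
    {"price": -1.0,  "quantity": 1000, "buyer_lei": "BAD"},
]
clean = [r for r in reports if not validate(r)]
print(f"{len(clean)} of {len(reports)} records passed")  # -> 1 of 2
```

The point is not these particular checks but the discipline: a record that fails validation should trigger an investigation, not make a quieter contribution to the ‘gospel’.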
The problem this creates can be termed the ‘illusion of control’[5]: the “perception that control is effective whether or not that is really the case.”[6] Because the numbers are there, and there are complex models behind them, people believe them. If 2008 can be remembered as “the year stress-testing failed,” as Haldane asserts, there is a real risk that the next crisis will be remembered as the year the systemic risk models failed.
The point is this: without the right MiFID II standards on instrument and transaction data quality in place, there is a danger that any conclusion reached as a result of the analysis of this data will be fundamentally and unobservably flawed. The ‘control’ achieved by using the results of the model will be an illusion. The adage that “all models are wrong, but some are useful”[7] is apposite, as the stakes in relying on a faulty or misunderstood ‘riskometer’[8] are enormous.
A new platform is required, and a joined-up approach across systemically important firms, supervisors and their supply chain is needed to avoid a regime of chaos that could fail to spot the next crisis. In 2010, the paper asked: will it happen? It would appear, in 2014, that the question, albeit in the form of 21 new ones from Paris, is still very much alive and kicking.
_____________________________________________________________________
[1] Gladwell, Malcolm, Open Secrets: Enron, intelligence and the perils of too much information (2007), The New Yorker: http://www.newyorker.com/reporting/2007/01/08/070108fa_fact
[2] Gladwell describes the difference thus: “Osama bin Laden’s whereabouts are a puzzle. We can’t find him because we don’t have enough information. The key to the puzzle will probably come from someone close to bin Laden, and until we can find that source bin Laden will remain at large… The problem of what would happen in Iraq after the toppling of Saddam Hussein was, by contrast, a mystery. It wasn’t a question that had a simple, factual answer. Mysteries require judgements and the assessment of uncertainty, and the hard part is not that we have too little information but that we have too much.”
[3] Foley, Stephen, Grovelling Goldman says sorry for documents foul-up (2010), The Independent: http://www.independent.co.uk/news/business/news/grovelling-goldman-says-sorry-for-documents-foulup-2015109.html
[4] We are increasingly facing the same situations presented to Charles Babbage, the ‘father of the computer’, when he created his difference engine: “On two occasions I have been asked: ‘Pray, Mr Babbage, if you put into the machine wrong Figures, will the right answers come out?’ … I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”
[5] Power, Michael, Organized Uncertainty: Designing a world of risk management (2007), OUP
[6] Anderson, Richard, et al., Thinking beyond Turnbull: A guidance paper, forthcoming
[7] Box, George, Sampling and Bayes’ Inference in scientific modelling and robustness (1980), Journal of the Royal Statistical Society, Series A, 143, Part 4
[8] Danielsson, Jon, The Myth of the Riskometer (2009), VoxEU: http://www.voxeu.org/index.php?q=node/2753