What lessons have been learned from the NASDAQ outage and will the SEC’s latest response stop it from happening again?
Following the NASDAQ outage in August, the SEC announced that Chair White had “stressed the need for all market participants to work collaboratively – together and with the Commission – to strengthen critical market infrastructure and improve its resilience when technology falls short.” This is exactly the kind of proactive approach that infrastructure professionals in the IT industry have been advocating for some time. But does it go far enough?
The problem runs much deeper than the recent high-profile failings. Back in the 1980s, NASDAQ began to link buyers and sellers electronically and was the first stock market to use computers for automated trading. It was even the first stock market in the US to offer trading online. The situation we find ourselves in today is that we have inherited the basic architectural framework of a system that was alive and kicking in the 1980s.
- What will be the next technology fault to hit market infrastructure?
- What steps will the SEC take once the results of its new exercise are in?
- Will we see a major industry-wide systems overhaul within the next five years?
Much the same is true of many large financial institutions’ IT systems. Over the years, technicians have simply added to the ‘pool of infrastructure’ because the system was too critical to be taken offline for any length of time, let alone replaced entirely. International regulators such as the FSB and BCBS have identified this as an issue, but a unified global response has yet to materialise.
Traders have complained that US stock exchanges have failed to keep up with the demands of ever-faster trading. In reality, the plumbing and connectivity between the different markets may have been flawed for years; what is now apparent is that these flaws are rooted in poor design.
The SEC has specified a number of measures for the industry to take heed of going forward. But the only one with any measurable effect on critical infrastructure is the requirement that exchanges “Provide comprehensive action plans that address the standards necessary to establish highly resilient and robust systems for the securities information processors (SIPs), including testing standards and disclosure protocols.”
While this is a good initial step that addresses the specific component behind the outage (the SIP), the SEC has failed to take action on anything beyond it, and these systems harbour many other weak points. What makes matters worse is that many large and complex systems are now completely reliant on one another: if one goes down, it has a knock-on effect right across the world.
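To make that knock-on risk concrete, here is a minimal Python sketch of how a single failure propagates through a dependency graph of interconnected market systems. The topology and system names are hypothetical, purely for illustration.

```python
from collections import deque

# Hypothetical topology: each system maps to the systems that consume
# its data (its downstream dependents). Illustrative only, not real.
DEPENDENTS = {
    "SIP": ["exchange_A", "exchange_B", "retail_brokers"],
    "exchange_A": ["dark_pools", "retail_brokers"],
    "exchange_B": ["dark_pools"],
    "dark_pools": [],
    "retail_brokers": [],
}

def knock_on(failed):
    """Breadth-first walk: everything reachable downstream is affected."""
    affected = {failed}
    queue = deque([failed])
    while queue:
        system = queue.popleft()
        for downstream in DEPENDENTS.get(system, []):
            if downstream not in affected:
                affected.add(downstream)
                queue.append(downstream)
    return affected

# One SIP outage touches every system that consumes its data.
print(knock_on("SIP"))
```

In a graph like this, the more systems a component feeds, the larger the blast radius of its failure, which is exactly why the SIP’s weak points matter well beyond the SIP itself.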
To counter this, the SEC has asked market participants to “Identify and provide assessments of the robustness and resilience of other critical infrastructure systems”. But whom does it define as market participants? For example, do high-frequency traders, who can bypass the SIP entirely (because it slows down their trading), have to follow the same letter of the law and internally audit their IT systems for weak points, as they will have to under MiFID II in the EU?
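As a rough illustration of why high-frequency traders bypass the SIP, the sketch below compares a direct exchange feed against a consolidated feed. The latency figures are invented; the structural point is that a consolidator can only publish after the slowest contributing feed arrives, plus its own processing time.

```python
# Invented figures, for illustration only; not real measurements.
DIRECT_FEED_US = {"exchange_A": 50, "exchange_B": 70, "exchange_C": 60}
SIP_PROCESSING_US = 500  # assumed consolidation overhead, microseconds

def consolidated_latency_us():
    """A consolidated quote is ready only after the slowest feed arrives."""
    return max(DIRECT_FEED_US.values()) + SIP_PROCESSING_US

print(f"Direct feed from exchange_A: {DIRECT_FEED_US['exchange_A']} us")
print(f"Consolidated SIP-style feed: {consolidated_latency_us()} us")
```

However fast the consolidator is made, it structurally trails the direct feeds it aggregates, so any rule that only governs the consolidated tape leaves the fastest participants outside its reach.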
This becomes more of an issue with the SEC’s next bullet point: “Provide SIP plan and/or rule amendments addressing the issuance, effectiveness, and communication of regulatory halts.” So, again, what happens to HFT firms? Are they immune to these changes?
This, again, is a start, but it is not wide-ranging enough given the importance of these systems. In fact, the lessons of the Flash Crash in 2010 and the Knight Capital incident in 2012 have not been heeded: high-frequency algorithms were at the heart of both market failures. Yet the SEC has neglected these pressing HFT issues, simply because they were not the cause of failure this time round.
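To illustrate the kind of control those lessons point towards, here is a minimal sketch of a pre-trade kill switch that halts an algorithm once its order rate exceeds a limit, broadly the sort of safeguard MiFID II expects trading firms to build in. The threshold and interface are invented for illustration.

```python
import time

class KillSwitch:
    """Blocks all further orders once the per-second order rate is breached."""

    def __init__(self, max_orders_per_sec=100):  # illustrative threshold
        self.max_orders_per_sec = max_orders_per_sec
        self.window_start = time.monotonic()
        self.count = 0
        self.halted = False

    def allow_order(self):
        """Call before sending each order; returns False once tripped."""
        if self.halted:
            return False
        now = time.monotonic()
        if now - self.window_start >= 1.0:  # start a new one-second window
            self.window_start, self.count = now, 0
        self.count += 1
        if self.count > self.max_orders_per_sec:
            self.halted = True  # trip permanently until a human resets it
            return False
        return True
```

A runaway algorithm of the Knight Capital kind would trip a control like this within a second, rather than flooding the market for the better part of an hour.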
As a result, we are forced to question whether the scope of the SEC’s conclusions is really wide enough to encompass the many different elements that make up a trading system, whose complexity is multiplied by every other system it has to communicate with. Or is this just a token gesture aimed at restoring confidence in the stock exchanges?
Either way, it is clear that the complexity of these trading systems has exceeded the limits of human management, even with automation to assist. Something needs to change, but what?
Change will not be easy, cheap or pain-free, but the trading industry needs a system redesign and overhaul, together with closer integration across the industry: not only the stock exchanges, but also the myriad of firms using their platforms. Is it really worth developing an antiquated system even further? Just imagine if companies or households had kept their computers from the 1980s, bolting new upgrades onto them.
The industry needs to take a more proactive and less reactive approach to these system failures. With more and more people relying on the exchanges, and with trading already running at speeds these systems were never designed to handle, outages will only become more frequent and more severe.
[accordion]
[pane title="Themes"]
- Many exchange systems are still based on obsolete technology
- Regulations like MiFID II are beginning to look more closely at how market participants design their systems
- HFT regulation needs to be considered alongside exchange regulation
[/pane]
[/accordion]