Semantics: the key to finance’s food chain

The regulators that oversee the economy are drowning in oceans of data, but need better standards to make sense of it all.

The struggle stemming from the lack of standardised data was clearly visible in 2012, when the Commodity Futures Trading Commission (CFTC) needed to trace the so-called ‘London whale’, a trader who had accumulated credit default swap positions worth over $6 billion.  Despite reporting parties sending their swaps to a Swap Data Repository, the CFTC struggled to make effective use of the data.  Because the reporting requirements did not specify a data format, each party had disclosed its data in whatever form it found convenient.

The data reporting situation has not changed massively since then.  Steven Maijoor, ESMA’s chair, noted in January that, last year alone, ESMA received 44 billion derivatives reports.  Yet serious questions have been raised about the quality of what is in them and how much use regulators are able to make of them.  So where are we in understanding the food chain of finance?

Entity identification standards

Data standards, like fresh and salt-water fish, can be divided into a number of different species.  The first, and one of the most significant, is the entity identification standard.  The concept itself is not new: various agencies have long used entity identifiers to recognise the entities that register with them.  However, each standard covers only a specific and narrow group of entities or individuals, such as the Central Index Key (CIK), which is used to keep track of issuers, funds and certain shareholders registered with the Securities and Exchange Commission (SEC).  In addition, market participants create their own proprietary identifiers, which may spread beyond a single organisation depending on how widely they are adopted.

The Legal Entity Identifier (LEI), a 20-character alphanumeric code that uniquely identifies legal entities worldwide and is backed by a robust, open governance framework, was embraced by the Financial Stability Board (FSB) in the aftermath of the 2008 crisis.  Financial institutions can have complex webs of ownership and hierarchies of subsidiaries.  Methods of identifying these elements of the business were diverse and incomplete, which made it impossible for regulators to assess the credit exposures of troubled institutions.  The FSB called for global implementation of the LEI to bring transparency to the interdependencies between market actors.  For further information see our article on the LEI.

In 2014, CPMI and IOSCO established the Harmonisation Group, a working group mandated to develop guidance on key OTC derivatives data elements.  The Group seeks to standardise and harmonise important data, particularly through global use of the LEI, which would allow reported information to be aggregated.  Its work includes producing guidance on the definition, format and use of key data elements, including the Unique Transaction Identifier (UTI) and the Unique Product Identifier (UPI).  The Harmonisation Group has now published a consultation on the second batch of these data elements.
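Identifier standards like the LEI also lend themselves to cheap automated validation.  As a brief illustration (a minimal sketch, not regulatory guidance), the last two characters of an LEI are check digits computed under ISO 7064 MOD 97-10, the same arithmetic used for IBANs:

```python
# Minimal LEI checksum validation (ISO 17442 / ISO 7064 MOD 97-10).
def lei_is_valid(lei: str) -> bool:
    """Return True if a 20-character LEI passes its check digits."""
    if len(lei) != 20 or not lei.isalnum():
        return False
    # Map letters to numbers (A=10 ... Z=35), digits stay as-is, then
    # treat the whole string as one large decimal integer.
    expanded = "".join(str(int(c, 36)) for c in lei.upper())
    return int(expanded) % 97 == 1

print(lei_is_valid("5493001KJTIIGC8Y1R12"))  # True: check digits verify
```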

Product identification standards

The second significant variety of data standard refers to products, allowing both identification and description of a given product.  One of the most common product standards is the International Securities Identification Number (ISIN), but not all instrument types have defined standards.  Repos are one of the major funding vehicles for large financial institutions, and their stability is crucial for market wellbeing; yet, according to the Office of Financial Research, repos have no defined data standard.  Moreover, instruments, unlike entities, are not static: they evolve continuously with the market, so their data needs constant updating and verification to remain relevant.  Other types, such as bilateral swaps, can also be highly customised to the parties’ needs.

Another particularly troublesome category is derivatives, as their value is linked to the performance of other instruments.  Products such as asset-backed securities (ABS) or collateralised debt obligations (CDOs) are complex, and there is no standardised way of identifying their underlying components and linking them to the overarching ABS or CDO.

Recently, the Association of National Numbering Agencies (ANNA), the organisation responsible for the ISIN, launched the Derivatives Service Bureau (DSB), a fully automated ISIN generator for OTC derivatives.  For the first time, OTC derivatives require near-real-time allocation of ISINs on a user’s application, in order to meet the regulatory demands of MiFID II, MiFIR and MAR.  The DSB will be the first numbering agency designed to operate on a global basis.  Although it answers a clear market need, concern remains about other product types that face similar requirements but fall outside the scope of the DSB’s work.
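As an aside, the ISIN itself shows what a well-specified identifier looks like: twelve characters ending in a check digit.  Here is a minimal sketch of the standard validation (ISO 6166 applies the Luhn algorithm to a digit-expanded code):

```python
# Minimal ISIN checksum validation (ISO 6166 / Luhn algorithm).
def isin_is_valid(isin: str) -> bool:
    """Return True if a 12-character ISIN passes its Luhn check."""
    if len(isin) != 12 or not isin.isalnum():
        return False
    # Expand letters to two digits (A=10 ... Z=35) before the check.
    digits = "".join(str(int(c, 36)) for c in isin.upper())
    total = 0
    for i, d in enumerate(reversed(digits)):
        n = int(d) * 2 if i % 2 == 1 else int(d)  # double every 2nd digit
        total += n // 10 + n % 10                 # sum decimal digits
    return total % 10 == 0

print(isin_is_valid("US0378331005"))  # True: a well-known equity ISIN
```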

Semantic standards

As we note frequently on these pages, one of the most highlighted problems in the financial sector is the absence of a common language for reporting.  Today, each regulatory initiative has its own requirements, which interpret the political and organisational imperatives of the industry in slightly different ways, with different terms and meanings.  Different semantics create unnecessary duplication and add to the confusion over data requirements.  Not only will fields that look the same be reported differently for different purposes but, even within a single regime, information from individual institutions will differ to such an extent as to render the reports irreconcilable with one another.


If the instruments, entities, transactions and other species in our ecosystem shared the same definitions of concepts, individual systems could easily aggregate data from a variety of sources.  Today, the data standards in existence rely on different terms and definitions for the same concepts.  This matters most for complex products, such as OTC derivatives, whose definitions and structures are less clearly defined than those of more mundane instruments.  It prevents both regulators and financial firms from aggregating and analysing data in a meaningful manner: the results are unreliable because it is impossible to tell whether discrepancies come from the market or from the data.  Standardisation of concepts could also be extended beyond financial instruments to contract terms, capturing more information for inclusion in risk analyses.
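To make the aggregation point concrete, consider a toy sketch in which two reporting regimes label the same concepts differently.  The field names below are entirely hypothetical; the point is that records only aggregate cleanly once both are mapped onto one shared vocabulary:

```python
# Toy sketch of semantic alignment across two reporting regimes.
# All field names here are hypothetical illustrations.
from collections import defaultdict

REGIME_A = {"cpty_lei": "counterparty", "notional_amt": "notional"}
REGIME_B = {"counterparty_id": "counterparty", "nominal_value": "notional"}

def to_common(record: dict, mapping: dict) -> dict:
    """Rename regime-specific fields into the shared vocabulary."""
    return {mapping[k]: v for k, v in record.items() if k in mapping}

reports = [
    to_common({"cpty_lei": "LEI-X", "notional_amt": 1_000_000}, REGIME_A),
    to_common({"counterparty_id": "LEI-X", "nominal_value": 250_000}, REGIME_B),
]

# Once the semantics agree, exposure per counterparty aggregates trivially.
exposure = defaultdict(float)
for r in reports:
    exposure[r["counterparty"]] += r["notional"]
print(dict(exposure))  # {'LEI-X': 1250000.0}
```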

The problem of overcrowded standards lacking a common language has already been recognised in the world of messaging.  The primary users of messaging standards are treasuries, each of which developed its own proprietary system to communicate with its banks.  This, in turn, created a flood of individual standards that overlap in function and purpose.  It has finally been recognised that more is not necessarily better.  Recently, greater focus has been given to ISO 20022.  Even though it is not yet the global standard for messaging, newer standards, such as the Single Euro Payments Area (SEPA), are based on it, making it easier for systems to communicate.  One lesson that reference data folk can take from messaging standards is the need to create a foundation that can sustain all the diverse activities that financial institutions engage in.
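What ‘based on ISO 20022’ means in practice is that messages share one XML grammar, so different systems can parse one another’s output.  Below is a skeletal illustration using element names from the pacs.008 credit transfer message; it is a fragment for flavour, not a complete, schema-valid message:

```python
# Skeleton of an ISO 20022 pacs.008 (customer credit transfer) message.
# Illustrative fragment only; a real message carries many more elements.
import xml.etree.ElementTree as ET

NS = "urn:iso:std:iso:20022:tech:xsd:pacs.008.001.02"
doc = ET.Element("Document", xmlns=NS)
cdt_trf = ET.SubElement(doc, "FIToFICstmrCdtTrf")
hdr = ET.SubElement(cdt_trf, "GrpHdr")
ET.SubElement(hdr, "MsgId").text = "MSG-0001"             # sender's reference
ET.SubElement(hdr, "CreDtTm").text = "2017-02-20T09:00:00"

print(ET.tostring(doc, encoding="unicode"))
```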

One of the most significant recent developments comes from the European System of Central Banks’ (ESCB) Statistics Committee: the Banks’ Integrated Reporting Dictionary (BIRD).  Ever-increasing reporting requirements force market participants to adapt their internal systems to accommodate change.  Unfortunately, even with each institution exercising its best efforts, differences arise in how they read particular requirements.  BIRD aims to correct this misalignment.  Its website contains a data model that describes, in precise terms, the data that should be extracted from institutions’ systems and used to create the necessary reports.  From a single input, the system can generate different reports, thereby reducing the time and effort involved in producing and analysing them.  The system is ‘a public good’ and everyone can access it.
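BIRD itself is a data model and documentation effort rather than software, but the ‘single input, many reports’ idea behind it can be sketched in a few lines.  The record layout and report cuts below are invented for illustration; BIRD’s actual model is published on its website:

```python
# Toy sketch of deriving several reports from one canonical extract.
# The record layout and report cuts are invented for illustration.
loans = [
    {"lei": "LEI-A", "country": "DE", "amount": 500.0, "in_default": False},
    {"lei": "LEI-B", "country": "FR", "amount": 300.0, "in_default": True},
]

def totals_by(records, key):
    """Aggregate the canonical extract along one chosen dimension."""
    out = {}
    for r in records:
        out[r[key]] = out.get(r[key], 0.0) + r["amount"]
    return out

# Two different 'reports' from the same input, with no re-keying needed.
print(totals_by(loans, "country"))                                  # by geography
print(totals_by([l for l in loans if l["in_default"]], "country"))  # defaults only
```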

A change in mindset

It must be noted that the introduction of new data standards (or the expansion of existing ones) is not in itself a panacea.  In fact, this is a conversation that has been ongoing for decades, and we have seen a myriad of data standards, some more comprehensive than others.

The biggest obstacle is not finding a ‘silver bullet technology’ to solve all of the financial industry’s headaches; the challenge is changing the industry’s mindset so that it can see “a world of possibilities in linking and accessing of widely-dispersed data across the web”, as Dr James Hendler, one of RDF/OWL’s creators, puts it.  Standardisation of data is necessary … but it is not the only step in the right direction when tackling the data issues the industry faces.
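In Hendler’s spirit, linking dispersed data is exactly what semantic-web tooling was built for.  Here is a minimal sketch using the rdflib library (the identifiers, namespace and property names are invented; a real deployment would use a published ontology such as FIBO):

```python
# Minimal linked-data sketch: tie an LEI-identified entity to an
# ISIN-identified instrument.  The vocabulary below is hypothetical.
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/finance#")  # hypothetical vocabulary
g = Graph()

entity = URIRef("http://example.org/lei/LEI-OF-ISSUER")
instrument = URIRef("http://example.org/isin/ISIN-OF-SECURITY")

g.add((entity, EX.issues, instrument))         # entity --issues--> instrument
g.add((instrument, EX.label, Literal("Example equity")))

print(g.serialize(format="turtle"))
```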

The oceans of regulatory data are vast and contain diverse ecosystems.  That said, coming up with a taxonomy to understand how all the fish are related should not be beyond the wit of man.  If we want to know what kind of financial product is on our dinner table, or when the next storm will disrupt trading routes, we need to get serious about the semantics.

Why now?  Well, this year, we need to start moving up a very big RegTech mountain.  The kinds of standards we are talking about have already been introduced in other sectors.  In the pharmaceutical and automotive industries, for example, common standards are applied and everyone uses the same data protocols, so data from anybody’s report can be reused.

Ergo, the approach we are recommending is not that scary.  In fact, it is one that other sectors have already followed.  Are they as complicated, global and fast-moving as finance?  Probably not.  Is RegTech now ready to give us the tools we need for the ascent?  Definitely.  More importantly, if we do it right we can be safer and more efficient, and carry less risk.  What’s not to like about the business case for RegTech standards this time around?


JWG will be exploring these issues at our 300+ person RegTech Capital Markets Conference, which Business Insider noted as one of the world’s top RegTech conferences to attend in 2017.  The afternoon keynote on “Standardising demands for regulatory data” will cover all the issues discussed in this article in greater depth.  Please sign up here if you’re interested in attending next week, or if you’d like to join one of our new RegTech special interest groups that start next month.

To promote global dialogue on how to deliver regulatory change, JWG posts hundreds of focused articles a year to thousands of subscribers.  Get involved and join the mailing list.
