What lies beneath?

JWG analysis.

The Legal Entity Identifier (LEI) is a top-level form of identification, designed to be applied and recognised universally.  Essentially, it should provide one unique code for each distinct legal entity, together with the reference data attached to it, and be usable anywhere in the world.  It sounds great, doesn’t it?  It sounds like a solution to one of the great standards issues in financial services regulation.  What lies beneath, however, is a system that is highly fragmented.

The LEI is a unique 20-character alphanumeric code that identifies a legally distinct entity involved in a financial transaction or relationship.  Such a system should, in theory, allow regulators and risk managers to identify the parties to a transaction efficiently and effectively, enabling them to analyse and evaluate systemic risks as well as more granular risks at the level of an individual firm.  This could be a big step towards firms and regulators being able to spot the next crisis before it happens.
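Under ISO 17442, the last two characters of the code are check digits computed with the ISO 7064 MOD 97-10 scheme, so a malformed identifier can be caught before it pollutes a report.  As a minimal sketch in Python (the example prefix below is made up, not issued by any LOU), validation looks like this:

```python
import string

# Map each character to its ISO 7064 value: '0'-'9' -> 0-9, 'A'-'Z' -> 10-35.
CHAR_VALUES = {c: str(i) for i, c in enumerate(string.digits + string.ascii_uppercase)}

def _to_number(code: str) -> int:
    """Interpret an alphanumeric string as the integer used by the MOD 97-10 scheme."""
    return int("".join(CHAR_VALUES[c] for c in code.upper()))

def compute_check_digits(base18: str) -> str:
    """Return the two trailing check digits for an 18-character LEI prefix."""
    return f"{98 - _to_number(base18 + '00') % 97:02d}"

def is_valid_lei(lei: str) -> bool:
    """A 20-character LEI is valid when the whole string maps to a number equal to 1 mod 97."""
    return len(lei) == 20 and lei.isalnum() and _to_number(lei) % 97 == 1

# 'EXAMPLE0ENTITY0001' is an illustrative prefix only, not a real registration.
base = "EXAMPLE0ENTITY0001"
lei = base + compute_check_digits(base)
print(lei, is_valid_lei(lei))  # prints the full 20-character code, then True
```

The check catches transcription errors, but it says nothing about whether the underlying reference data is current or correct, which is where the problems discussed below arise.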

In addition, implementing and using the LEI should provide obvious efficiency gains for firms and regulators alike.  It would remove the continual duplication of work and eliminate the problems of reconciling different codes, and the data attached to them, for the same entity.

Recent estimates state that over 360,000 entities have been assigned an LEI, 25,000 of them newly assigned.  The same figures also suggest that around 20% of registered LEIs have now lapsed.  In addition, as the number of registrants has grown, data quality issues have started to crop up and utility issues have increased.  Ultimately, the number of entities without an LEI still dwarfs the number that have been assigned one.

Problems have arisen from trying to implement a source of universality in an industry that lacks universality.  The issue is neatly encapsulated by language: western IT systems, for example, that cannot process Japanese characters.  Such flaws could prove critical when implementing a system designed to be global.
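To make the character-set point concrete (the registered name below is purely illustrative, not a real firm), a legacy single-byte encoding rejects outright what UTF-8 handles without difficulty:

```python
# Hypothetical Japanese registered name attached to an LEI reference data record.
name = "株式会社サンプル商事"  # roughly "Sample Trading Co., Ltd." - illustrative only

print(name.encode("utf-8"))  # fine: UTF-8 covers the full character set
try:
    name.encode("latin-1")   # a legacy single-byte code page cannot represent it
except UnicodeEncodeError as exc:
    print("legacy system would reject this record:", exc)
```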

The fact that the US is pushing for the designation of GIIN codes does not help the LEI case.  GIINs (Global Intermediary Identification Numbers) are issued by the US Internal Revenue Service (IRS).  When the IRS was drawing up the FATCA reporting regime, it had to make a decision based on the information available at the time.  The LEI was still in its early stages and, consequently, the IRS went with the GIIN.  The IRS has acknowledged the potential for migration – or possibly complete replacement – but, as yet, there is no sign of this happening.

There has been criticism of regulators for not mandating the use of LEIs – the IRS is a case in point.  Now some regulators are beginning to change tack.  For instance, the Prudential Regulation Authority has asked all entities that fall within the scope of Solvency II to obtain an LEI code by 30 June 2015, and all other insurers to do so by 30 June 2016.  ESMA are expected to mandate the LEI under the MiFID II Regulatory Technical Standards, which they are due to publish in September.  Whilst these are all positive moves, there remains a long way to go.  Registering for an LEI is currently a relatively expensive process, prompting as many rumblings of discontent about the mandating as there previously were about the lack of it.

In addition, work on creating and initiating Level 2 LEI data has only just started.  Given the complications arising from Level 1 data, it should be anticipated that the issues will escalate.  Level 2 data maps legal entities to their ultimate parents, but questions remain unanswered as to how it will be developed, with key concerns focusing on:

  • Who should report – parent, child or both?
  • What will be the definition of ultimate parent?
  • How granular should the data be?
  • To what extent will public authorities be direct contributors to obtaining Level 2 data?

Potentially, information could be supplied by more than one source, introducing complexity, inefficiency and confusion.  Moreover, the definition of ultimate parent needs to be settled; otherwise, data built on differing thresholds and definitions will be close to useless.  And that defeats the whole purpose of universality.
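None of these questions has been settled, but a rough sketch helps to show what is at stake.  The record below is purely hypothetical – the field names and relationship type are assumptions, not a published schema – yet it illustrates how a Level 2 link from a child to its asserted ultimate parent might look, and why duplicate reporting under differing definitions would be hard to reconcile:

```python
from dataclasses import dataclass

@dataclass
class ParentRelationship:
    """Hypothetical Level 2 record - the schema and field names are illustrative only."""
    child_lei: str            # LEI of the reporting (child) entity
    parent_lei: str           # LEI of the asserted ultimate parent
    relationship_type: str    # e.g. "ULTIMATE_PARENT" - definition still to be agreed
    reported_by: str          # "PARENT", "CHILD" or "AUTHORITY" - an open question above
    accounting_standard: str  # consolidation basis, e.g. "IFRS", which drives the parent definition

# Two submissions for the same child entity from different reporters under different
# definitions (the LEI strings are placeholders, not real identifiers):
record_a = ParentRelationship("5299000EXAMPLECHILD1", "529900EXAMPLEPARENT1",
                              "ULTIMATE_PARENT", "CHILD", "IFRS")
record_b = ParentRelationship("5299000EXAMPLECHILD1", "529900EXAMPLEPARENT2",
                              "ULTIMATE_PARENT", "PARENT", "US_GAAP")

# Without one agreed definition and one reporting channel, the records conflict.
print(record_a.parent_lei == record_b.parent_lei)  # False
```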

Whilst it is always good to plan in advance, it is clear that the Level 1 data issues need to be resolved before the Level 2 process is pushed forward.  Finding solutions could inform the development plan for Level 2 data and, at the same time, greatly improve the application of Level 1 data.  A universal identifier has its benefits, but implementing it across an industry that is a far cry from uniform will take careful planning.

To promote global dialogue on how to deliver regulatory change, JWG posts hundreds of focused articles a year to thousands of subscribers.  Get involved and join the mailing list.
