RegTech Intelligence


2014’s KYC implementation: How long before new data collection becomes unfeasible?

It’s common knowledge that KYC requirements are becoming a major problem for banks, many of which have thousands of employees conducting due diligence, document collection or data entry. Unfortunately, the regulatory trend towards more, and more detailed, customer data has not abated. JWG research has shown that the 20+ regulations to be implemented over the next 3 years will require an additional 300 entity data fields for compliance. That’s just between Europe and the US.

Regulations across the risk, trading and financial crime landscape mean that over 150 of these fields will be required in 2014 to meet requirements such as client classification, risk and trade reporting or due diligence. While half of these regulations will require the use of the Legal Entity Identifier (LEI), worryingly, a quarter of the 300+ data points require direct client interaction to source.

The sheer number of data points is not the only problem. Each regulatory initiative seems to define new requirements for what may be ambiguous concepts, i.e., those that have slightly different meanings in each segment of the market. Over 20% of the 300+ data points are not easy to define and will require research to collect. The numerous regulatory definitions of counterparty classification according to status, geography, business practices or sector require significant investment in understanding the underlying data points that can yield an accurate classification for firms. Not only is the industry challenged to make sense of multiple efforts, which are often implemented in parallel, but the change levels are very high.

The stated and implied requirements for managing customer identities (e.g., LEIs/pre-LEIs, GIINs, TINs, NACE, NAIC, SIC, etc.) and associated regulatory information (e.g., financial counterparty, non-financial counterparty, professional, etc.) create fundamentally new, complex and interlinked demands: new hierarchies, linkages, identifiers and definitions.
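To make the shape of that demand concrete, the sketch below models a single client entity carrying multiple identifier schemes, per-regime classifications and one ownership link. The identifier schemes and classification labels are those named above; the field layout and all identifier values are hypothetical illustrations, not a regulatory standard:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EntityRecord:
    """Toy client-entity record (assumed structure, for illustration only)."""
    legal_name: str
    identifiers: dict = field(default_factory=dict)      # scheme -> value, e.g. "LEI", "GIIN", "TIN"
    classifications: dict = field(default_factory=dict)  # regime -> status, e.g. "EMIR", "MiFID"
    parent_lei: Optional[str] = None                     # one link in an ownership hierarchy

# Hypothetical record: identifier values are made-up placeholders.
acme = EntityRecord(
    legal_name="Acme Trading Ltd",
    identifiers={"LEI": "5299000EXAMPLE000000", "GIIN": "XXXXXX.00000.LE.826"},
    classifications={"EMIR": "non-financial counterparty", "MiFID": "professional"},
    parent_lei="5299000EXAMPLE000001",
)
```

Even in this reduced form, keeping the identifiers, classifications and hierarchy links consistent across regimes is the crux of the data-management problem the article describes.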

Taken in isolation, complying with each new rule that requires entity data is not an impossible task. New standards, processes, systems and training will have to be put in place, all of which can be achieved with time and money. However, if a piecemeal, regulation-by-regulation approach is taken to managing client data, firms will be forced to collect a vast array of information manually (i.e. directly from the customer) on a case-by-case basis (e.g. ‘I need this information for AML, that information for FATCA and this information for MiFID’), all of which will lead to increased costs, suboptimal solutions and annoyance for the customer.
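The cost of the piecemeal approach can be shown with a toy calculation. The field sets below are invented for illustration (the real requirements run to hundreds of fields); the point is only that per-regulation outreach re-asks for data that a unified client-data model would source once:

```python
# Hypothetical, illustrative field sets per regulation: not the actual rule text.
REQUIRED_FIELDS = {
    "AML":   {"legal_name", "registered_address", "beneficial_owners", "lei"},
    "FATCA": {"legal_name", "registered_address", "giin", "us_tax_status"},
    "MiFID": {"legal_name", "lei", "client_classification"},
}

# Regulation-by-regulation: each rule triggers its own request to the client.
piecemeal_requests = sum(len(fields) for fields in REQUIRED_FIELDS.values())

# Unified client-data model: each field is sourced once and reused across rules.
unified_fields = set().union(*REQUIRED_FIELDS.values())

print(piecemeal_requests, len(unified_fields))  # 11 piecemeal asks vs 7 distinct fields
```

Scaled from three toy rules to 20+ regulations and 300+ fields, the gap between the two approaches is what turns case-by-case collection into an unsustainable burden.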

Many are hoping that new vendor software and utilities will provide the solutions. However, the potential of new technology and systems cannot always be realised without huge budgets and extended implementation timelines. The complexity of legacy systems across disparate silos means that building an integrated view of KYC data across a global bank is a long and arduous task. The sheer scale of the new requirements makes it increasingly difficult to align work streams across these regulations, forcing a continual, iterative approach to KYC compliance. Without a clear idea of which requirement affects which system, process or data set, isolating problem areas and aligning internal objectives becomes that much harder.

With deadlines for FATCA, EMIR and CRD IV in Q1 2014, the time left to solve these problems is shortening fast. This is a global problem, and given the depth and breadth of the challenges involved, it is going to be an unprecedented exercise in standard setting.

To promote global dialogue on how to deliver regulatory change, JWG posts hundreds of focused articles a year to thousands of subscribers. Get involved and join the mailing list.
