For 18 months we’ve been analysing the post-pandemic regulatory agenda and trying to make sense of the patchwork of new obligations.
One of the big themes we see is the need for control over the operations, conduct and culture of the organisation – something we talk about as #digitalriskcontrol and #digitalintegrity.
In advance of our November conference, Richard Bain and Steve LoGalbo discuss the new compliance challenges that the democratization of finance has introduced to market abuse programmes and the surveillance vendors on which they rely.
The democratization of finance has forced global regulators to shine their spotlights on market abuse. In September, MassMutual was fined $4m for failing to detect 250 hours of YouTube videos related to GameStop that were outside its surveillance perimeter.
Firms are rethinking the breadth and depth of their surveillance programmes and wrestling with thorny ethical and privacy issues. Technology and data vendors are not exempt and there is a big opportunity to get a trusted and integrated surveillance risk framework in place for the industry.
The expanding perimeter
As the EU, US and UK all shine their digital spotlights on market abuse regimes, both man and machine are under the microscope.
The pandemic had already heightened concerns about market abuse when GameStop widened the lens on how firms need to monitor conduct, culture and influencers.
In September, MassMutual was fined $4 million for failing to supervise the meme-stock activity of former employee Keith Gill (aka ‘Roaring Kitty’).
Regulators criticised management for failing to detect 250 hours of YouTube videos and 590 tweets containing securities-related content produced by its broker-dealer agents, and for failing to implement policies and procedures to detect, monitor or prevent them from discussing securities-related information on social media.
Financial institutions have built out large surveillance programmes based on third-party solutions. New third-party risk management rules require firms to control vendor solutions to the same standards that they hold themselves to. This puts each vendor in the unenviable position of having to answer to differing requirements from its customers. Conversely, each firm has the unenviable task of working out whether each vendor component meets its obligations.
Both firms and vendors are struggling to keep up with the pace of the game. Every day there is a new channel that is closed to outsiders. And the challenges go beyond access: new video analytics and natural language processing capabilities are required to detect anomalies in the information.
So we have to get more data, feed it to more machines and investigate the anomalies. Sounds simple enough, but in parallel, regulators have produced thousands of pages of new AI policy, and third-party risk management guidelines seek to control digital behaviour.
Firms have been monitoring trading for decades but now have the technology to work through the data at scale. In the case of high-speed machines, the feedback loop will be minutes, not weeks.
What is the big problem? Simple: deploying AI for surveillance could infringe the data privacy rights of staff and clients. This means:
- Data challenges: access rights for the data which underpins the models
- Data quality of source systems and reference data
- The right policy to anonymize/pseudonymize personal data
- Compliance with GDPR and other data privacy regimes
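To make the anonymize/pseudonymize point concrete, here is a minimal sketch of keyed pseudonymization (HMAC-SHA256) applied to a surveillance record before it reaches downstream analytics. Everything here is illustrative: the secret key, the `pseudonymize` helper and the record fields are assumptions, not any vendor's actual API, and a production deployment would keep the key in a managed vault with rotation.

```python
import hmac
import hashlib

# Illustrative only: in practice this key would come from a key
# management system, never a source file.
SECRET_KEY = b"rotate-me-via-a-key-management-system"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (e.g. an email address) with a
    deterministic keyed hash. Records for the same person can still be
    linked for anomaly detection, but the raw identity is not exposed,
    and without the key the mapping cannot be reversed by analysts."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# A hypothetical communications-surveillance record.
record = {"trader": "jane.doe@example.com", "channel": "chat", "msg_id": 42}
record["trader"] = pseudonymize(record["trader"])
```

Because the hash is deterministic, two messages from the same trader still correlate in the analytics layer; because it is keyed, a leaked dataset alone does not reveal identities, which is the distinction GDPR draws between pseudonymized and anonymized data.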
And, according to proposed EU law, all of this will need to be declared, certified, tested, overseen by senior management and, in high-risk cases, filed with regulators.
The use of facial recognition will be a particularly sticky subject under EU law, and any video involving clients could well fall under the high-risk AI requirements.
There’s a big opportunity to get a trusted, joined up way of thinking about how to manage an integrated risk framework.
Questions for our Annual RegTech conference:
- How does RegTech help resolve the conflicts between culture, conduct, financial crime and privacy?
- Are securities and privacy regulators aligned on the firms’ obligations?
- How is the bar rising on surveillance programmes?
- How have leaders kept pace with the volumes and scope of data required?
- Can AI predict which employees are involved in market issues?
- How does technology help ‘zoom in’ on behaviours and find red flags?
- How can we reload the ‘LOD conversation’ with new standards?
- How do regulators expect firms to explain the standards they have set?
- How can we monitor this data while respecting privacy?
- What does good surveillance look like for 2025?
Speakers and Guests
Aired on: January 27, 2024
Aired on: September 17, 2021