RegTech Intelligence


Conduct tech may fail to deliver insights, while increasing data privacy risk, ethical issues

By Rachel Wolcott, Senior Editor, Regulatory Intelligence

Behavioural monitoring and conduct analytics technology promise to make it easier for firms to detect employee misconduct as well as predict where it might occur next. However, reliance on data- and technology-led solutions may fail to deliver insights and controls, while increasing firms’ exposure to data privacy risks and ethical issues.

The emergence of surveillance as a conduct risk management solution comes at a time when European data privacy regulators have issued a series of fines to companies found to have excessively surveilled employees. Globally, regulators and lawmakers are seeking to rein in the application of artificial intelligence (AI) to surveillance and behavioural analysis.

Behaviour monitoring and conduct analytics tools broadly aim to use data employees generate on the job — keystrokes, trading activity, messaging — alongside personal employee data from human resources and internal audit (staff surveys, for example) to detect employee misconduct or draw conclusions about firms’ culture.

“Finance have had recording of all work calls and messages for some time, related to deal making, so these [tools] are an extension of those in terms of capturing and analysing behaviour. But [some behaviour surveillance tools] go well beyond that level into full-on ‘The Circle’ territory, particularly if used in general business with no regulatory requirements,” said Robert Baugh, chief executive at Keepabl, a London-based data privacy-as-a-service provider.

These tools, say advocates and systems vendors, identify emerging risks as well as meet regulatory expectations to monitor culture. Conduct risk, behavioural and business management experts, however, argue these tools are invasive and unproven, measure and monitor the wrong things, and risk making conduct and culture problems a lot worse.

“It’s actually like handwriting analysis or Myers-Briggs [personality tests] and all the other popular but ineffective ways firms try to gauge behaviour and staff sentiment. This magic new tech is going to mysteriously allow us to control something we couldn’t control before. It will end up in exactly the same place, which is to discover, suddenly, that the tools didn’t work,” said Roger Miles, a behavioural risk consultant who advises financial services firms and co-founded the Conduct and Culture Academy at the industry body UK Finance.

Second-order observations

Financial supervisors say direct observation is a way to assess firms’ culture, and that in a strong culture poor behavioural drivers should be identified by front-line staff, compliance and risk management. Few regulators, if any, call for electronic surveillance for culture measurement and assessment purposes. Such systems provide only second-order observations, that is, “insights” delivered by an algorithm. Premising conduct risk management on systems and data analytics ties all conduct understanding to formal surveillance, risk control metrics and staff sentiment surveys, said Miles.

“They’ve never heard of anecdotal, unstructured or external evidence; or if they have, they ignore these because they’re inconvenient to the systems’ view of the world. It is a pity because anecdotes, live behaviour observation, and external critiques like Glassdoor, Trustpilot, Corlytics and Violation Tracker provide a far clearer picture of what actually happens than the in-house systems version does,” he said.

ESG, diversity and inclusion concerns

Monitoring employees down to tracking every computer keystroke undermines companies’ purpose, values and diversity and inclusion efforts, while further compressing short-term thinking to the time it takes an employee to tap a keyboard. This close monitoring will drive away the very employees firms most want to keep, said J.S. Nelson, a visiting associate professor at Harvard Business School, who specialises in management practices, compliance and surveillance.

“We’re losing women and minorities and star performers — exactly the population you actually need. That’s an ESG argument [against surveillance] that goes to diversity and inclusion. That goes to what a company is and what it wants to be. This idea that it’s all about keystrokes and monitoring what people are doing — that’s threatening the future of the company,” she said.

Surveillance systems give firms the illusion of control, but at a time when the workforce is going through so many changes and the quit rate is at an all-time high, firms will find people will not willingly submit to these controls, said Nelson.

The data and analytics approach to conduct risk and culture assessment, which assumes employees should submit to behavioural monitoring as a condition of employment, is out of step with the current zeitgeist, said Miles.

“It may have worked for a period in the 80s and 90s, but I think it completely misreads the modern workforce. There have been staff revolts at Google where staff have walked out over government contracts. If most of the people working in the financial sector are by definition millennials, therefore, they have millennial values and are more interested in purposeful socially aware work and less interested in just taking the money,” he said.

Regulators, especially in the UK and the Netherlands, have emphasised psychological safety as a key indicator of a good culture. Widespread surveillance undermines psychological safety by ramping up micromanagement and stigmatising failure, creating an environment where mistakes are unacceptable.

“The literature on psychological safety says firms should destigmatise failure. Meaning that a sentient organisation experiments and learns by experimentation. It develops products by referencing things that work and things that don’t work. You’ve got to try in order to fail and then learn from failing. That model just doesn’t really exist in the financial sector very much where everything must succeed all the time and failure is terrible and unacceptable,” said Miles.

Data privacy, illegal monitoring

EU data privacy regulators have handed out many fines to companies for excessive employee surveillance. The Italian data privacy authority fined food delivery company Deliveroo 2.5 million euros in July for violating the privacy rights of its Italian drivers. It found that a shift-booking system used by Deliveroo discriminated against drivers in the assignment of orders and working hours, citing a “lack of transparency” in the system’s algorithms. The regulator also cited Deliveroo’s extensive surveillance of drivers “far beyond what is necessary”, including geolocation data and the storage of personal data collected during the execution of orders.

Another Italian food delivery service, Foodinho, was fined 2.6 million euros when the Italian DPA found it used discriminatory algorithms for managing its delivery riders. Both cases have important lessons for technology businesses and show some of the conflicts between AI and the General Data Protection Regulation, said Jimmy Orucevic, a privacy professional in KPMG’s cyber information management and compliance practice in Zurich.

A recent paper published by AFME and PwC, titled “Conduct Analytics – Insights-led and Data-driven: The Future of Conduct Risk Management”, acknowledged that “data privacy laws can impede the collection of all the desired data points”.

“Banks should strengthen efforts to measure and monitor culture. The regulatory focus on individual accountability continues, but regulators are also moving towards examining corporate culture and how this sets the tone for individual behaviours and enables the right client outcomes. This will reduce the focus on additional policing of individuals, which is becoming increasingly difficult due to data privacy and local labour laws,” it said.

The paper suggested firms should work with AFME “to lobby and provide suggestions to regulators e.g. for data privacy and remote and hybrid working”.

Regulators highlight privacy and ethical concerns

The Bank of England and Financial Conduct Authority (FCA) Artificial Intelligence Public-Private Forum’s most recent minutes noted that third-party artificial intelligence models could raise novel and challenging ethical issues when applied to monitor employee behaviour.

“There are third-party solutions that claim to detect and classify emotions based on voice data. If firms were to take decisions based on input from these models, the transparency would need to be very exact. Firms cannot simply act on the outputs saying that ‘this is what the model says’. Members noted that emotion-detecting models pose very difficult ethical questions as well as not being able to be monitored and evaluated. Another such example is use of AI models in HR to assess and classify candidates/employees based on behavioural data,” said the October 1 minutes.

The European Union’s draft Artificial Intelligence (AI) Act will require companies to comply with high-risk requirements when using AI systems for recruitment purposes, including application screening, and for making decisions on promotion or termination of work-related contractual relationships, task allocation, and monitoring and evaluating performance and behaviour.

Poor data, poor algorithms

Data-led, AI-powered technology’s effectiveness in predicting behaviour or managing HR tasks is patchy, and there have been many high-profile failures. In 2018, Amazon scrapped an AI recruiting tool that was biased against women. Also in 2018, an AI tool that claimed to predict a criminal defendant’s risk of committing another crime was shown to be no better than random guessing.

Experts put failures down to poor data sets, badly designed algorithms, not checking for biases and overpromising on what systems can actually do. Bloomberg’s Parmy Olson writes that for big tech companies AI’s flaws are getting harder to ignore.

“It’s fine for AI to occasionally mess up in low-stakes scenarios like movie recommendations or unlocking a smartphone with your face. But in areas like healthcare and social media content, it still needs more training and better data. Rather than try and make AI work today, businesses need to lay the groundwork with data and people to make it work in the (hopefully) not-too-distant future,” she said.


This article was originally published by Thomson Reuters Accelus Regulatory Intelligence on 28 October 2021.


If you’re interested in learning more about the future of RegTech or wish to get involved in the discussions, then join us at the 6th JWG conference, virtual and on-demand from the 16th to the 17th of November 2021.

Register Here

To promote global dialogue on how to deliver regulatory change, JWG posts hundreds of focused articles a year to thousands of subscribers. Get involved and join the mailing list.

