RegTech Intelligence

Data privacy rulings spotlight risks social media monitoring poses to firms, regulators


By Rachel Wolcott, Thomson Reuters

Rulings by the Belgian and French data privacy authorities (DPAs) emphasise the risk posed by social media monitoring and scraping technology to firms and regulators such as the UK Financial Conduct Authority (FCA) that commonly use such tools for sentiment analysis, as well as to monitor individuals’ and organisations’ online activity.

“The public nature of the personal data available on social networks does not mean that they lose the protection conferred by the GDPR,” the Belgian DPA said.

That means those scraping data from social media still must comply with the purpose limitation principle unless an exemption applies, and take other measures to safeguard individuals’ personal data. Purpose limitation in the General Data Protection Regulation (GDPR) is a requirement that “personal data be collected for specified, explicit, and legitimate purposes, and not be processed further in a manner incompatible with those purposes”.

Social media monitoring has become an issue particularly in the wake of the 2021 GameStop trading frenzy, during which a MassMutual employee was found to have posted 10 days’ worth of YouTube videos pushing meme stocks. Firms are keen to monitor social media to see whether employees are involved in market abuse schemes.

The UK FCA also uses social media monitoring tools to gather intelligence about scams and consumer concerns. It uses these tools, however, to monitor “relevant posts” about the FCA itself, including keyword searches on the name of at least one individual and of an activist group that has been critical of its performance and treatment of whistleblowers.

The FCA declined to comment.

Political profiling

The Belgian and French DPAs jointly fined EU DesinfoLab, a non-governmental organisation (NGO), for breaching the GDPR when it scraped personal data from 55,000 individuals’ Twitter accounts for analysis and political profiling without applying required safeguards such as pseudonymisation. EU DesinfoLab further breached individuals’ privacy when it published raw data to defend its findings.

“It serves as a reminder that even if your intentions are right, and your activities are legitimate, and the information is available, you still need to comply with GDPR and act responsibly. Even if it’s journalism or research, you still need to consider how it could impact people and how to mitigate data privacy risks,” said Thibaut D’hulst, counsel at Van Bael & Bellis in Brussels.

EU DesinfoLab used Twitter data to study online disinformation and Russian media influence connected to a French political scandal dubbed L’Affaire Benalla. In doing so, EU DesinfoLab created political profiles of individual Twitter users that included special data categories such as religious beliefs and sexual orientation.

“All of these are categories which, under GDPR, can only be used for a limited number of specific reasons and must otherwise be made anonymous. The [DPAs argued] there’s a risk of discrimination and reputational damage, but they looked at indirect risks as well — the fact that a very large number of people were affected, the sensitivity of the data — and said [the NGO] should have considered this beforehand. Then there’s this exercise called the data protection impact assessment, which it should have done to determine the appropriate risk-mitigation measures. That would have taught it that at least it needed pseudonymisation,” D’hulst said.
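Pseudonymisation of the kind the DPAs said was required can be illustrated with a short sketch. This is a hypothetical example, not drawn from the ruling or from EU DesinfoLab’s methods: direct identifiers (here, Twitter handles) are replaced with keyed hashes, and the key is held separately, so the published dataset no longer exposes identities on its own.

```python
import hashlib
import hmac

# Hypothetical illustration of pseudonymisation: replace a direct
# identifier (a Twitter handle) with a keyed hash. Because the secret
# key is stored separately from the dataset, the data counts as
# pseudonymised rather than anonymised -- the controller could still
# re-identify individuals using the key.
SECRET_KEY = b"store-this-key-separately"  # assumption: kept outside the dataset

def pseudonymise(handle: str) -> str:
    """Return a stable pseudonym for a user handle (case-insensitive)."""
    digest = hmac.new(SECRET_KEY, handle.lower().encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

# A scraped record keeps its analytical content but loses the identifier.
record = {"handle": "@example_user", "tweet": "..."}
record["handle"] = pseudonymise(record["handle"])
```

The same handle always maps to the same pseudonym, so analysis of posting patterns across a dataset remains possible without publishing raw account names.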

Too few researchers and journalists consider data privacy regulations when conducting research, he said.

FCA use of social media monitoring

The FCA has been public about its use of “digital listening tools” to gather intelligence.

“Our digital listening tools help us collect data on everything from mortgages and investments to fraud and scams. These help us understand consumer concerns and business challenges. For example, listening to discussions on social media showed the difficulties in the insurance markets during the pandemic,” Jessica Rusu, the FCA’s chief data, information and intelligence officer, said last year.

A Freedom of Information Act (FOIA) response shared with Regulatory Intelligence revealed, however, that the FCA monitors individuals and groups who criticise it. It says it collates social media posts but does not log the information captured.

“The Communications Division uses a social media monitoring tool to identify relevant posts on social media platforms. By ‘relevant posts’ we mean posts that are related to FCA activities. We have had a tool in place for several years and it is standard practice for many communications teams in the private and public sector to use monitoring tools to collate relevant posts from the millions of conversations that take place publicly on social media. The tool can help us identify consumer harm if it is being talked about online and track online responses to FCA social media activity,” the FCA said in FOI8956.


The FCA primarily monitors Twitter. That platform’s privacy feature allows users to choose who can view their posts. Monitoring tools used by communications teams in the private and public sectors can only identify posts from users who have chosen to make their posts publicly available, said an official familiar with the FCA’s premise for using such tools.

Because those Twitter users have chosen to make their tweets publicly available, the regulator does not believe any ethical or legal concerns arise from the use of social media monitoring tools to identify relevant posts published online, the official said.

The FCA stressed in the FOI response that it monitors using keywords and does not monitor individual accounts. Its keyword searches do, however, include one individual’s name and one activist organisation’s name.

“We do not monitor individual accounts; however, we do monitor public posts by individuals or organisations that relate to FCA or that mention FCA, Financial Conduct Authority or FCA employees. On that basis, we use keywords including ‘[name of organisation]’ and ‘named person’ (which is the only individual’s name used in our keywords at this time),” the FCA said in FOI8956.

The UK GDPR defines personal data as information that relates to an identified or identifiable individual, including names. It provides for a number of rights for individuals, including the right to restrict processing, rights in relation to automated decision-making and profiling, and the right to object.

“We do not log any of the information captured. We do not have a policy of notifying individuals or organisations that we add to our keywords. The keywords we use are updated on a regular basis. They are reviewed and agreed by members of the Communications Division and are dependent on the current communications priorities and activities,” the FCA said in FOI8956.


This article was first published by Thomson Reuters on 21 March 2022