A decision that U.S.-based Clearview AI breached the privacy of Australians by scraping their biometric information from the internet and disclosing it through a facial recognition tool highlights risks for regtech companies using web-scraped personal data in commercial software.
The decision by the Office of the Australian Information Commissioner (OAIC) says emphatically that images or personal information posted online do not imply an individual’s consent for commercial use. Clearview AI’s practices were “unreasonably intrusive”, and its covert collection of biometric information carried “significant risk of harm to individuals”, said Angelene Falk, the Australian Information Commissioner and Privacy Commissioner.
“When Australians use social media or professional networking sites, they don’t expect their facial images to be collected without their consent by a commercial entity to create biometric templates for completely unrelated identification purposes,” she said.
This decision follows the Spanish data protection authority’s (AEPD) one million euro fine imposed on Equifax Ibérica for unlawfully processing personal data scraped from public sources, in breach of the purpose limitation, data minimisation and other General Data Protection Regulation (GDPR) requirements. In April, it ordered Equifax to stop processing, and to delete, all the personal data that were subject to such processing.
Clearview judgement: Lessons for RegTechs
These decisions on web scraping and data privacy come at a time when financial services firms and regulators are increasingly turning to regtech tools to conduct employee surveillance, boost financial crime systems and controls, and monitor employee social media activity. Some of these tools use facial recognition databases such as Clearview AI’s and “alternative data sets”, which often are scraped from the web.
Clearview AI’s facial recognition tool includes a database of more than three billion images taken from social media platforms and other publicly available websites. Perhaps the most important part of the OAIC decision was another privacy regulator telling companies such as Clearview AI that scraping personal data from social media and the web is a privacy breach. Companies cannot treat individuals’ consent as implied.
“Consent may not be implied if an individual’s intent is ambiguous or there is reasonable doubt about the individual’s intention,” said Falk, noting that many social media platforms’ terms and conditions prohibit scraping.
Legitimate expectations of privacy are changing, Fraser Sampson, the UK’s Commissioner for the Retention and Use of Biometric Material and Surveillance Camera Commissioner, told Regulatory Intelligence.
“Simply because I’ve chosen to post something about an event that I’ve attended or someone that I was with at that particular time doesn’t then make that available at large to anybody and everybody that may be able to access it. Consent is becoming a very difficult justification to run for a number of reasons and some of them are in that [Clearview decision],” Sampson said.
Increasingly, consent comes back to who is using the data and for what purpose, Sampson said. For example, this case sets out that Australian social media users, when posting online, would not contemplate that their images would be used in a global automated facial recognition tool.
“The OAIC determination highlights the lack of transparency around Clearview AI’s collection practices, the monetisation of individuals’ data for a purpose entirely outside reasonable expectations, and the risk of adversity to people whose images are included in their database,” it said.
The OAIC also rejected Clearview AI’s argument that the law enforcement use case merited an exemption to data privacy laws.
“The respondent’s database includes at least three billion images. The vast majority of those individuals have never been and will never be implicated in a crime, or identified to assist in the resolution of a serious crime. The exception does not authorise the automated mass collection of Australians’ data merely because some of this data might be useful to law enforcement at a future point in time,” the OAIC said.
The decision sets out the significant risk of harm to individuals arising from “the indiscriminate scraping of people’s facial images, only a fraction of whom would ever be connected with law enforcement investigations”. The activity “may adversely impact the personal freedoms of all Australians who perceive themselves to be under surveillance”, it said.
“This includes harms arising from misidentification of a person of interest by law enforcement (such as loss of rights and freedoms and reputational damage), as well as the risk of identity fraud that may flow from a data breach involving immutable biometric information,” the OAIC said.
Clearview AI’s facial recognition tool has broader applications in the private sector, the OAIC said, citing the company’s U.S. and international patent applications. Other uses include “learning more about a person the user has just met, such as through business, dating, or other relationship”, “to verify personal identification for the purpose of granting or denying access for a person, a facility, a venue, or a device” or “to accurately dispense social benefits and reduce fraud”.
These applications mark a step beyond surveillance into profiling, Sampson said.
“It’s certainly way beyond surveillance as we would understand it — standing in the street or keeping an eye on people. This is trawling the information about them, and then aggregating it for whatever purposes you’ve got. I don’t think that can be categorised or characterised any more as surveillance,” he said.
Some regtech vendors allow users to create individual employee profiles of baseline behaviour, which they then track for deviations from the norm.
“There is a need for some different categorisation here as well, which isn’t just semantics. This is about properly describing what the appropriate use or uses for technology are, one of which will be overt surveillance, one of which will be covert surveillance, and then there’ll be lots of other new activities that we haven’t really labelled yet,” Sampson said.
Clearview AI tried to argue it did not conduct business in Australia and therefore was not subject to its data laws. The OAIC rejected this argument.
“Clearview AI’s activities in Australia involve the automated and repetitious collection of sensitive biometric information from Australians on a large scale, for profit. These transactions are fundamental to their commercial enterprise,” Falk said.
The Equifax Ibérica decision set out that web scraping raises data quality questions. That decision pointed out that not all the data scraped from public sources could be proven to be accurate or current. That will be the case with any personal data scraped from the web and used in a tool. In this case, it was alleged that the inaccuracy led to poor lending decisions.
Cease, desist, destroy
The OAIC ordered Clearview AI to cease collecting facial images and biometric templates from individuals in Australia, and to destroy existing images and templates collected from Australia. Clearview AI has also received cease-and-desist letters from Twitter and LinkedIn in relation to alleged scraping from their sites in violation of their terms and conditions, the OAIC decision said.
Australian law permits the commissioner to order redress to be paid to individuals suffering damages or losses related to a data privacy breach, but the order to destroy existing images could be worse than a fine, data privacy experts said. It is costly and amounts to companies having to destroy their own products.
Clearview AI will appeal the OAIC decision. A lawyer for Clearview in Australia, Mark Love, told Reuters the company would seek a review of the decision with the Administrative Appeals Tribunal and that the finding showed the information commissioner misunderstood its business. Clearview maintains the commissioner lacks jurisdiction over its business.
UK decision pending
The OAIC conducted its investigation jointly with the UK Information Commissioner’s Office (ICO).
“The joint investigation has finished and the ICO is considering its next steps and any formal regulatory action that may be appropriate under the UK data protection laws,” it said in a statement.
The ICO and the OAIC have been investigating their respective police forces’ use of Clearview AI technology which the company has been marketing internationally on a free-trial basis.
Further enforcement, legal action
Privacy advocacy groups have filed at least five complaints against Clearview AI with EU data privacy regulators. The Hamburg Commissioner for Data Protection and Freedom of Information has already taken action against the company.
In the United States, the American Civil Liberties Union is suing Clearview AI under the Illinois Biometric Information Privacy Act, challenging its capture of millions of Illinois residents’ faceprints without their knowledge or consent. Clearview AI sought dismissal of the suit on First Amendment grounds but failed; the suit continues.
There have been consequences for Clearview AI’s law enforcement users too. The Finnish data protection ombudsman reprimanded the National Bureau of Investigation for its use of Clearview AI, which has now stopped. In February, the Swedish data protection authority fined the police authority 250,000 euros for unlawfully using the tool.
This article was originally published by Thomson Reuters Accelus Regulatory Intelligence on 8 November 2021
On 29 November the UK ICO issued a provisional view to fine Clearview AI £17 million.
On 16 December the French CNIL issued an order to cease processing and delete the data within two months.