How would you feel if you knew the vacation selfies you posted on Facebook and Instagram were scraped by a company and used without your permission in a “search engine for faces” sold to governments around the world? Not very happy, right?
That’s presumably how the UK Information Commissioner’s Office (ICO) felt when it fined Clearview AI £7.5 million ($9.4 million) for breaking UK data protection laws. It also ordered Clearview to stop scraping and using the personal data of UK residents and to delete that data from its systems. The fine is a substantial reduction from the initially proposed £17 million, but that doesn’t mean the circumstances have changed much since the UK and Australian joint investigation began.
“The company not only enables identification of those people but effectively monitors their behavior and offers it as a commercial service. That is unacceptable,” said UK Information Commissioner John Edwards. “People expect that their personal information will be respected, regardless of where in the world their data is being used.”
Similar problems in other countries
The UK fine follows similar penalties imposed on Clearview by other countries. In February 2021, the company exited Canada after authorities there recommended that Clearview AI stop offering its facial recognition services to Canadian clients and stop collecting images of people in Canada. In December 2021, the French privacy watchdog ordered Clearview to delete French citizens’ data, and in March 2022 Italian authorities fined Clearview almost $22 million for the same reasons. In 2020, the American Civil Liberties Union and several other nonprofits filed a lawsuit against the company in Illinois state court; as a result, in May 2022 the company agreed to restrict its services in the US to law enforcement organizations.
Each time, the company contested the sanctions. It insisted that it’s doing legitimate business, follows all rules and regulations, has an ethical process of selecting clients, and plays a crucial role in helping law enforcement.
Hoan Ton-That, founder and chief executive of Clearview AI, also expressed his dismay at the decision:
“I am deeply disappointed that the UK Information Commissioner has misinterpreted my technology and intentions,” he said. “We collect only public data from the open internet and comply with all standards of privacy and law. I am disheartened by the misinterpretation of Clearview AI’s value to society.”
How it works
According to the ICO, Clearview has collected more than 20 billion facial images of people from all over the world, scraping them from social media, websites and other public sources. Its systems have been used by government organizations around the world, including the UK’s Metropolitan Police, Ministry of Defence, and National Crime Agency. The service works much like a “people registry” or “search engine for faces”: a user uploads a photo of a person, the system finds matches in a database of billions of images, and it then provides links to where the matching images appear online.
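Clearview’s internals are not public, but the general technique behind such a “search engine for faces” is well known: reduce each face image to an embedding vector with a neural network, then find the stored vectors closest to a query. A minimal sketch, with random vectors standing in for learned embeddings and invented example URLs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend database: one embedding vector per scraped photo, plus the
# URL the photo was scraped from. Real systems would use embeddings
# produced by a face-recognition network and an approximate
# nearest-neighbor index instead of a brute-force scan.
db_embeddings = rng.normal(size=(1000, 128))
db_urls = [f"https://example.com/photo/{i}" for i in range(1000)]

def search(query: np.ndarray, top_k: int = 3) -> list[tuple[str, float]]:
    """Return the top_k most similar stored photos by cosine similarity."""
    db_norm = db_embeddings / np.linalg.norm(db_embeddings, axis=1, keepdims=True)
    q_norm = query / np.linalg.norm(query)
    scores = db_norm @ q_norm              # cosine similarity to every photo
    best = np.argsort(scores)[::-1][:top_k]  # indices of the closest matches
    return [(db_urls[i], float(scores[i])) for i in best]

# A query identical to a stored embedding should return that photo first.
matches = search(db_embeddings[42])
print(matches[0][0])  # https://example.com/photo/42
```

The privacy concern follows directly from this design: once a photo’s embedding is in the database, anyone with a new picture of that person can recover the links where their other photos appear.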
The obvious privacy issue is that the photos were collected without their subjects’ permission, and that the system can easily be abused for stalking, tracking, intimidation and surveillance, despite the company’s assurances that this will never happen. At the same time, although it faces backlash in many countries, Clearview AI is based in the US and owns no assets in Europe. Consequently, while it can stop offering its services in those countries, it is hard to see how it could be forced to delete photos of European citizens. Those pictures remain accessible outside Europe and, until another solution is found, can be used by virtually anyone.