Facial recognition company Clearview AI has been fined more than £7.5m by the UK’s privacy watchdog for collecting the facial images of people in Britain from the web and social media.
The Information Commissioner’s Office (ICO) said the company had illegally collected more than 20 billion images of people’s faces worldwide to create a global online database for facial recognition.
It has issued an enforcement notice ordering the company to stop obtaining and using the personal data of UK residents and to delete the data on them that it has already collected.
“Given the high number of UK internet and social media users, Clearview AI’s database is likely to include a substantial amount of data from UK residents, which has been gathered without their knowledge,” the ICO stated.
“Although Clearview AI no longer offers its services to UK organisations, the company has customers in other countries, so the company is still using personal data of UK residents.”
Clearview AI offers an app that customers can use to upload a photograph of someone and attempt to identify them by matching it against the company’s database, which the ICO found had been compiled unlawfully.
The company’s customers include numerous commercial and police organisations and its database has provoked concerns from US politicians and civil liberties organisations.
John Edwards, the UK information commissioner, said the company “not only enables identification” of the people in its database “but effectively monitors their behaviour and offers it as a commercial service. That is unacceptable.”
The watchdog announced in November 2021 its provisional intent to fine the company more than £17m as part of a joint investigation with the Australian privacy watchdog. It is not clear why the final penalty was reduced to £7.5m.
Sky News has contacted Clearview for a response to the fine.
The use of facial recognition technology by police has been controversial in the UK and beyond.
Fraser Sampson, the biometrics and surveillance camera commissioner, recently warned police forces against deploying the technology to identify potential witnesses and not just suspects.
Successive independent commissioners have warned that automatic facial recognition technology is even more privacy-invasive than the police collection of DNA and fingerprints.
However, unlike those forms of biometric data, facial recognition images have not been placed by the government on a similar statutory footing, which would ensure limits and oversight on how state authorities can use the technology.