Australia and the UK have opened a joint investigation into Clearview AI. Specifically, the two countries' privacy regulators are concerned with Clearview's practice of "scraping" data from the web and collecting biometric information.
The two countries aren’t the first to question Clearview AI, the company behind the controversial facial recognition program. Clearview AI claims to have a database with three billion images gathered from the open web. It offers that database to law enforcement, supposedly so they can identify criminals and victims. But the practice raises some obvious privacy concerns.
Twitter, Google and YouTube have all sent Clearview AI cease-and-desist letters, alleging that Clearview violates their terms of service. Facebook and Venmo also demanded Clearview stop scraping their data. The ACLU rejected Clearview’s claim that its tech is “100% accurate,” and it recently sued the company for allegedly violating an Illinois state law.
Despite these concerns, thousands of public law enforcement agencies and private companies work with Clearview. A data breach earlier this year exposed the company’s full client list, which includes Best Buy, Macy’s, the Department of Justice and a number of foreign states, like the UAE. That hack also raised concerns about how secure Clearview’s database really is.
The Office of the Australian Information Commissioner (OAIC) and the UK's Information Commissioner's Office (ICO) will conduct the investigation. Until it's complete, the two regulators aren't saying much, only that the investigation will be conducted in accordance with Australia's Privacy Act 1988 and the UK's Data Protection Act 2018. The OAIC and ICO may also work with other data protection authorities that have raised similar concerns.
In a statement provided to Engadget, Clearview AI CEO Hoan Ton-That said:
“Clearview AI searches publicly available photos from the internet in accordance with applicable laws. It is used to help identify criminal suspects. Its powerful technology is currently unavailable in UK and Australia. Individuals in these countries can opt-out. We will continue to cooperate with UK’s ICO and Australia’s OAIC.”
Facial recognition as a whole is facing increased scrutiny in the US. IBM has stopped working on the tech due to human rights concerns. Amazon placed a "moratorium" on police use of its tech, and Microsoft says it won't sell facial recognition software to police without federal regulation, though Microsoft reportedly attempted to sell its tech to the DEA. Police in San Diego and Boston won't use facial recognition, and New York City passed an NYPD surveillance oversight bill. Meanwhile, in Detroit, facial recognition has already led to one wrongful arrest.