

Clearview AI, a facial recognition company that says it’s amassed a database of billions of photos, has a fantastic selling point it offers up to police departments nationwide: It cracked a case of alleged terrorism in a New York City subway station last August in a matter of seconds.

“How a Terrorism Suspect Was Instantly Identified With Clearview,” read the subject line of a November email sent to law enforcement agencies across all 50 states through a crime alert service, suggesting its technology was integral to the arrest.

It’s a compelling pitch that has helped rocket Clearview to partnerships with police departments across the country. But there’s just one problem: The New York Police Department said that Clearview played no role in the case.

As revealed to the world in a startling story in the New York Times this weekend, Clearview AI has crossed a boundary that no other tech company seemed willing to breach: building a database of what it claims to be more than 3 billion photos that can be used to identify a person in almost any situation.

It’s raised fears that a much-hyped moment, when universal facial recognition could be deployed at a mass scale, is finally at hand.

But the company, founded by CEO Hoan Ton-That, has drawn a veil over itself and its operations, misrepresenting its work to police departments across the nation, hiding several key facts about its origins, and downplaying its founders’ previous connections to white nationalists and the far right.

As it emerges from the shadows, Clearview is attempting to convince law enforcement that its facial recognition tool, which has been trained on photos scraped from Facebook, Instagram, LinkedIn, and other websites, is more accurate than any other on the market.

However, emails, presentations, and flyers obtained by BuzzFeed News reveal that its claims to law enforcement agencies are impossible to verify, or flat-out wrong.

For example, the pitch email about its role in catching an alleged terrorist, which BuzzFeed News obtained via a public records request last month, explained that when the suspect’s photo was “searched in Clearview,” its software linked the image to an online profile with the man’s name in less than five seconds.

Clearview AI’s website also takes credit for the incident, in which a man allegedly placed rice cookers made to look like bombs, citing it in a flashy promotional video as one example among thousands in which the company assisted law enforcement.

But the NYPD says this account is not true.

“The NYPD did not use Clearview technology to identify the suspect in the August 16th rice cooker incident,” a department spokesperson told BuzzFeed News. “The NYPD identified the suspect using the Department’s facial recognition practice where a still image from a surveillance video was compared to a pool of lawfully possessed arrest photos.”

While Clearview has claimed associations with the country’s largest police department in at least two other cases, the spokesperson said “there is no institutional relationship” with the company.

In response, Ton-That said the NYPD has been using Clearview on a demo basis for a number of months. He declined to provide any further details.

In the Times report and in documents obtained by BuzzFeed News, Clearview AI said that its facial recognition software had been used by more than 600 police departments and government groups, including the FBI.
