Toronto police have admitted to secretly using Clearview AI facial recognition software. This comes as the company faces class action lawsuits in the US.
Clearview AI, the company that uses facial recognition software to scour social media, is quite controversial. Not only is its method of scraping faces from social media without users' permission controversial, but the company's tactics in trying to push its software have been drawing controversy as well.
In one case, the American Civil Liberties Union (ACLU) slammed the company for attempting to manufacture an endorsement from it. From BuzzFeed:
Clearview AI, the facial recognition company that claims to have a database of more than 3 billion photos scraped from websites and social media, has been telling prospective law enforcement clients that a review of its software based on “methodology used by the American Civil Liberties Union” found it to be stunningly accurate.
“The Independent Review Panel determined that Clearview rated 100% accurate, producing instant and accurate matches for every photo image in the test,” read an October 2019 report that was included as part of the company’s pitch to the North Miami Police Department. “Accuracy was consistent across all racial & demographic groups.”
But the ACLU said that claim is highly misleading and noted that Clearview’s effort to mimic the methodology of its 2018 facial recognition study was a misguided attempt in “manufacturing endorsements.”
“The report is absurd on many levels and further demonstrates that Clearview simply does not understand the harms of its technology in law enforcement hands,” ACLU Northern California attorney Jacob Snow told BuzzFeed News, which obtained the document through a public records request.
Clearview’s announcement that its technology has been vetted using ACLU guidelines is the latest questionable marketing claim made by the Manhattan-based startup, which has amassed a vast repository of biometric data by scraping photos from social media platforms, including Facebook, Instagram, Twitter, YouTube, and LinkedIn. Among those claims, Clearview AI has told prospective clients that its technology was instrumental in several arrests in New York, including one of an individual involved in a Brooklyn bar beating and another of a suspect who allegedly planted fake bombs in a New York City subway station. The NYPD denied using Clearview’s technology in both of these cases.
While some question the marketing side of its business practices, others question the legal side of the business as well. Some of Clearview AI's actions, critics contend, are illegal under Illinois law. In fact, Facebook has already paid $550 million to settle claims that it broke the law in question. Now, Clearview AI is facing a class action lawsuit citing the Facebook case as a precedent. From TechCrunch:
Just two weeks ago Facebook settled a lawsuit alleging violations of privacy laws in Illinois (for the considerable sum of $550 million). Now controversial startup Clearview AI, which has gleefully admitted to scraping and analyzing the data of millions, is the target of a new lawsuit citing similar violations.
Clearview made waves earlier this year with a business model seemingly predicated on wholesale abuse of public-facing data on Twitter, Facebook, Instagram and so on. If your face is visible to a web scraper or public API, Clearview either has it or wants it and will be submitting it for analysis by facial recognition systems.
Just one problem: That’s illegal in Illinois, and you ignore this to your peril, as Facebook found.
The lawsuit, filed yesterday on behalf of several Illinois citizens and first reported by Buzzfeed News, alleges that Clearview “actively collected, stored and used Plaintiffs’ biometrics — and the biometrics of most of the residents of Illinois — without providing notice, obtaining informed written consent or publishing data retention policies.”
Not only that, but this biometric data has been licensed to many law enforcement agencies, including within Illinois itself.
All this is allegedly in violation of the Biometric Information Privacy Act, a 2008 law that has proven to be remarkably long-sighted and resistant to attempts by industry (including, apparently, by Facebook while it fought its own court battle) to water it down.
So, to say Clearview AI is controversial might be an understatement to some. Yet, while the startup faces considerable backlash in the US, the company is now the subject of a privacy debate in Canada as well. It turns out that Toronto police have been quietly testing the technology. When initially asked whether the force uses the technology, police denied it. Now, the force admits that it has, indeed, been quietly using the controversial software. From the CBC:
Toronto police have admitted some of their officers have used Clearview AI — a powerful and controversial facial recognition tool that works by scraping billions of images from the internet — one month after denying using it.
Spokesperson Meaghan Gray said in an email that some members of the force began using the technology in October 2019. She did not say what for or how many times it had been used.
Chief Mark Saunders directed those officers to stop using the technology when he became aware of its use on Feb. 5, she said. Gray did not say who originally approved the use of the app.
Clearview AI can turn up search results, including a person’s name and other information such as their phone number, address or occupation, based on nothing more than a photo. The program is not available for public use.
Gray said officers were “informally testing this new and evolving technology.” She did not say how the chief found out.
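The CBC's description of turning up a name and contact details from nothing more than a photo reflects how face search engines generally work: a photo is converted into a numerical embedding, which is then compared against a database of embeddings computed from scraped images. As a rough illustration only (this is not Clearview's actual code, and every name, size, and dataset below is a hypothetical placeholder), a nearest-neighbour lookup over face embeddings might look like this:

```python
# Illustrative sketch of how a face search engine could match a query
# photo against a database of scraped images. NOT Clearview's actual
# code; the embedding size, dataset and names are all hypothetical.
import numpy as np

EMBEDDING_DIM = 512  # a typical size for face embedding models (assumption)

# Hypothetical database: one embedding per scraped photo, plus whatever
# metadata was scraped alongside it (name, profile URL, etc.).
rng = np.random.default_rng(0)
db_embeddings = rng.normal(size=(10_000, EMBEDDING_DIM))
db_embeddings /= np.linalg.norm(db_embeddings, axis=1, keepdims=True)
db_metadata = [{"name": f"person_{i}"} for i in range(10_000)]

def search(query_embedding: np.ndarray, top_k: int = 5):
    """Return metadata for the top_k stored faces most similar to the query."""
    q = query_embedding / np.linalg.norm(query_embedding)
    similarities = db_embeddings @ q               # cosine similarity
    best = np.argsort(similarities)[::-1][:top_k]  # highest scores first
    return [(db_metadata[i], float(similarities[i])) for i in best]

# In a real system, the query embedding would come from running a face
# detection/embedding model on the uploaded photo.
for meta, score in search(rng.normal(size=EMBEDDING_DIM)):
    print(meta["name"], round(score, 3))
```

A real deployment at the scale of billions of photos would replace the brute-force matrix product with an approximate nearest-neighbour index, but the basic idea is the same: the photo itself is the only query needed.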
The article goes on to say that Toronto police have been told to stop using the software pending a legal opinion from the Ontario Privacy Commissioner and the Crown Attorney's office on whether it is an appropriate tool for law enforcement to use.
Facial recognition technology has been quite controversial for some time. In 2018, the ACLU tested Amazon's facial recognition software, Rekognition, by feeding it photos of every member of Congress and comparing them against a database of mugshots. Since no sitting member of Congress should match a mugshot, any hit would be a false positive, making the test a simple accuracy check. The software came back with 28 false matches, with a disproportionate number of those matches being people of colour. From the ACLU at the time:
Amazon’s face surveillance technology is the target of growing opposition nationwide, and today, there are 28 more causes for concern. In a test the ACLU recently conducted of the facial recognition tool, called “Rekognition,” the software incorrectly matched 28 members of Congress, identifying them as other people who have been arrested for a crime.
The members of Congress who were falsely matched with the mugshot database we used in the test include Republicans and Democrats, men and women, and legislators of all ages, from all across the country.
The false matches were disproportionately of people of color, including six members of the Congressional Black Caucus, among them civil rights legend Rep. John Lewis (D-Ga.). These results demonstrate why Congress should join the ACLU in calling for a moratorium on law enforcement use of face surveillance.
To conduct our test, we used the exact same facial recognition system that Amazon offers to the public, which anyone could use to scan for matches between images of faces. And running the entire test cost us $12.33 — less than a large pizza.
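For readers curious how such a test works in practice, the mechanics are straightforward: Rekognition lets anyone index a set of face images into a "collection" and then search that collection with a new photo. A minimal sketch using the AWS boto3 SDK might look like the following; the collection name, file name, and threshold here are illustrative assumptions rather than the ACLU's exact setup (though 80% is, to our knowledge, Rekognition's default similarity threshold):

```python
# Sketch of a mugshot-matching test against Amazon Rekognition's public
# API using boto3. The collection name, image file, and threshold are
# illustrative assumptions, not the ACLU's exact configuration.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

def search_face(image_path: str,
                collection_id: str = "mugshot-collection",
                threshold: float = 80.0):
    """Search an indexed face collection for matches to the given photo."""
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    response = client.search_faces_by_image(
        CollectionId=collection_id,       # previously built with index_faces
        Image={"Bytes": image_bytes},
        FaceMatchThreshold=threshold,     # Rekognition's default is 80%
        MaxFaces=5,
    )
    return response["FaceMatches"]

# Running one search per member of Congress and counting every returned
# match as a false positive would reproduce the shape of the ACLU's test.
for match in search_face("member_of_congress.jpg"):
    print(match["Face"]["ExternalImageId"], match["Similarity"])
```

The low cost the ACLU cites makes sense in this light: each search is a single pay-per-call API request, so scanning all 535 members of Congress amounts to a few hundred cheap lookups.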
Suffice it to say, there is a whole array of privacy and security concerns surrounding the technology. The fact that it is now spreading into Canada will likely be a concern to Canadians, especially given all the controversy the company has already faced in the US.
Drew Wilson on Twitter: @icecube85 and Facebook.