Use of Clearview AI Technology Causes Privacy Concerns in Australia

Things seem to have quieted down with respect to Clearview AI. There might be a reason for that: the technology is now being pushed in Australia.

Clearview AI has caused widespread controversy in both Canada and the US, but lately, things seemed to have gone quiet with respect to the facial recognition software. It turns out the technology is now stirring controversy in Australia. Amid a major privacy debate going on in the country, Clearview AI is once again finding itself a focal point.

A report from Independent Australia offers a glimpse not only into how the technology is being deployed, but also into how Clearview AI is selling it. From the report:

Federal and state Australian police forces are now using technology developed by Clearview AI, a controversial U.S.-based facial recognition company that mines information from social media and boasts a database of over three billion images, scraped from sites such as Twitter, Instagram, Facebook and Linkedin.

At first glance, the use of advanced facial recognition in police work seems reasonable. In the U.S., Clearview has claimed astonishing results and that their software can only be used by police forces.

Now, this first claim might strike long-term observers as bizarre. Just last month, reports surfaced that, in the US, the technology was being used by the ultra-wealthy for fun at parties. At the time, Clearview AI defended itself by suggesting that it was no big deal and that these were just “trial accounts”. Additionally, the company claimed that these trial accounts were given out responsibly. So, the sudden claim that the technology in the US can only be used by police forces already doesn’t jibe with what we’ve seen.

In the same month, New York residents also delivered a petition to city council demanding a ban on facial recognition software. These are likely not the “astonishing results” that Clearview AI is talking about.

In another section of the story, we see something else:

As reported by Kashmir Hill in the New York Times, although the technical capability to identify “everyone” based on facial recognition has been available for some time, tech companies held off on the release of such a tool, fearful of the “Pandora’s Box” of privacy issues that would follow.

Clearview was the first to put aside such qualms.

The uptake was immediate. Clearview says over 600 law enforcement agencies and the U.S. Department of Defense have commenced using the technology. Canadian police are conducting trials. Soon the world’s police will have access.

The claim that Canadian police are still conducting trials doesn’t jibe with what we’ve been seeing here in Canada. From our vantage point, the trials were conducted in secret. When news surfaced that the RCMP were using the software in the first place, it became one of the most explosive privacy stories Canada has seen in a while. Shortly after the story broke, the RCMP were quick to take to the media airwaves to say that they had stopped using the software. In response to the firestorm that erupted, the RCMP said that they had gone to the privacy commissioner and Crown counsel to ask whether use of the software is allowed.

In response, the Ontario privacy commissioner quickly called on the RCMP to immediately halt use of the technology. From there, the story only became more explosive when four other privacy commissioners said that they were joining the investigation. The question top of mind for them is whether or not the RCMP violated any relevant privacy laws at either the provincial or the federal level.

As for the US side of things, the picture really isn’t much better. Clearview AI is facing a lawsuit in Illinois. The privacy law allegedly breached was tested earlier when Facebook paid out $550 million. Essentially, things aren’t looking all that great for Clearview AI from a legal perspective. So, the image that things are rolling along smoothly for the software is full of holes from what we’ve seen. Something tells us that none of this was part of the sales pitch in Australia.

At the very least, the article does point out the Clearview AI data breach in which the company’s entire client list was stolen. That, of course, called into question the company’s ability to secure its information. This is a particularly thorny subject given the privacy implications the company’s software tends to raise amongst critics.

So, with the company seemingly making a push into Australia, people are already raising questions about the use of the software. As Australians learn more about what this company has been up to in other countries, further questions will likely follow.

The unfortunate thing for privacy-minded Australians, however, is that the government has been very keen on stamping out personal privacy. A prime example is Australia’s notorious anti-encryption laws, which appear to breach international law. It’s quite possible that this situation is partly why Clearview AI took a keen interest in selling its software in the country in the first place. As a result, things may only get bumpier for privacy rights in Australia in the weeks and months ahead.

Drew Wilson on Twitter: @icecube85 and Facebook.
