The Recent News Story About A Tech & Privacy Issue We Should All Be Aware Of
How The Government Is Using One Company To Track Your Every Move
Facial recognition technology has been around for a long time, with common applications in consumer products and security.
Much like a fingerprint, each of our faces carries distinctions and characteristics that are unique to us.
Concerns about the use of facial recognition technology have existed since its first applications, and fears of improper use and questionable reliability have only increased since then.
How will the data be used? Will I be tracked in public? Will the data be used for any purposes beyond law enforcement? These are just some of the initial questions that cause anxiety among citizens.
Governments at the city, state, and federal levels have generally taken an inconsistent approach to this topic. For example, San Francisco became the first city to ban the use of facial recognition by local agencies and law enforcement, while other cities and states haven’t communicated any formal stance on how this technology could be used.
Some of the biggest names in tech, meanwhile, have come out strongly against the use of facial recognition by law enforcement. Amazon, IBM, and Microsoft are just some of the names that have taken a public stance.
However, one company that you’ve probably never heard of is doing the exact opposite and is working with government agencies in ways that are unclear.
That company is Clearview AI, the maker of a facial recognition tool used by law enforcement to analyze photos of people’s faces and track all the places where a face may have appeared, both online and offline.
The company can take a single photo of you, a publicly available Instagram photo for example, and alert law enforcement to all the places where you may have appeared, by scouring countless security camera feeds, databases, and social media platforms.
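To make that less abstract, here is a minimal, hypothetical sketch of how this kind of matching generally works: a photo of a face is converted into a numeric “embedding,” and that embedding is compared against a database of embeddings built from scraped photos. This is not Clearview AI’s actual code or API; the fake_embedding function, the threshold, and the sample data below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def fake_embedding(person_seed: int, noise: float = 0.05) -> np.ndarray:
    """Hypothetical stand-in for a real face-embedding model.

    A real system runs a neural network on a photo and returns a fixed-length
    vector (often 128 or 512 numbers) describing the face. Here we fake one:
    the same person_seed yields similar vectors, simulating the fact that
    different photos of the same face produce nearby embeddings.
    """
    base = np.random.default_rng(person_seed).normal(size=128)
    vec = base + rng.normal(scale=noise, size=128)  # per-photo variation
    return vec / np.linalg.norm(vec)                # unit length for cosine similarity

# A scraped "database": photo sources mapped to face embeddings.
ALICE, BOB = 1, 2
scraped_db = {
    "instagram_post_123":     fake_embedding(ALICE),
    "mall_camera_frame_456":  fake_embedding(ALICE),  # same person, different photo
    "news_article_photo_789": fake_embedding(BOB),
}

# The single query photo an investigator starts with (another photo of Alice).
query = fake_embedding(ALICE)

# Nearest-neighbor search: every scraped photo whose embedding is close enough
# to the query is reported as a possible sighting of the same face.
THRESHOLD = 0.9  # similarity cutoff; real systems tune this and still make mistakes
for source, embedding in scraped_db.items():
    similarity = float(query @ embedding)  # cosine similarity of unit vectors
    if similarity > THRESHOLD:
        print(f"Possible match in {source} (similarity={similarity:.2f})")
```

Even in this toy version you can see what privacy experts worry about: everything hinges on a similarity threshold, and at the scale of billions of scraped photos even a small error rate can put the wrong person in front of investigators.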
This brings us to the recent news that you should be aware of.
Reports have emerged over the last few days showing that a large government agency, U.S. Immigration and Customs Enforcement (ICE), has signed a contract with Clearview AI.
According to publicly available data, the contract is worth at least $224,000, and ICE is just one of the large agencies using data from the company. Other reports indicate that as many as 600 law enforcement agencies, many of them small local departments, have started using the company’s service over the last year alone.
So why is this a concern, and why should average people like you or me care?
For context, questions about who is behind Clearview AI, what oversight it is subject to, and how it aggregates data have stirred concern since the company first started operating.
A simple Google search of the company yields a number of headlines, many focused on the controversial nature of what it does and how little is known about it.
The core points of controversy include people being unaware of how their images are used, how accurate the company’s algorithm actually is, whether it could lead to false arrests, and how heavily law enforcement agencies are relying on this questionable data.
I’ve been familiar with the company for quite some time through tech blogs, but I recall becoming really concerned after listening to a well-researched episode of The New York Times’ podcast ‘The Daily’ earlier this year.
In fact, under the California Consumer Privacy Act (CCPA) I requested a full report of any and all profile data that Clearview AI has on me, and I was met with pushback. Because my driver’s license at the time was from Georgia, the last state I had lived in, the company used that as a loophole to deny my request, since CCPA rights apply only to California residents.
Since then, class action lawsuits have been filed against the company, and countless privacy experts have sounded the alarm that Clearview AI may not be a suitable steward of such sensitive data.
To keep this post from dragging on any longer than it should, I will wrap up by suggesting you do three things: 1) become more aware of what personally identifiable data you are making available online, 2) request a report of your profile data from Clearview AI if you live in California, Illinois, Canada, or the European Union, and 3) learn more about your privacy rights.