What are we going to do about all the cameras?
To be honest, this question keeps me up at night, in something like terror.
Cameras are the defining technological advance of our age. They are the keys to our smartphones, the eyes of tomorrow’s autonomous drones and the FOMO engines that drive Facebook, Instagram, TikTok, Snapchat and others. Cheap, ubiquitous, viral photography has fed social movements like Black Lives Matter, but cameras are already prompting more problems than we know what to do with – revenge porn, live-streamed terrorism, YouTube reactionaries and other photographic ills.
And cameras aren’t done. They keep getting cheaper and – in ways both amazing and alarming – they are getting smarter. Advances in computer vision are giving machines the ability to distinguish and track faces, to make guesses about people’s behaviors and intentions, and to comprehend and navigate threats in the physical environment. In China, for example, smart cameras sit at the foundation of an all-encompassing surveillance totalitarianism unprecedented in human history. In the West, intelligent cameras are now being sold as cheap solutions to nearly every private and public woe, from catching cheating spouses and package thieves to preventing school shootings and immigration violations. I suspect these and far more uses will take off, because I have gleaned one ironclad axiom about society: if you put a camera in it, it will sell. That is why I really worry that we are stumbling dumbly into a surveillance state. And that is why I think the only reasonable thing to do about smart cameras now is to put a stop to them.
Just recently, San Francisco’s board of supervisors voted to ban the use of facial-recognition technology, particularly by law-enforcement agencies.
We might still decide, at a later time, to give ourselves over to cameras everywhere. But let’s not jump into an all-seeing future without understanding the risks at hand.
So, what are the risks?
Detroit, for example, along with several other American cities, is moving quickly, and with little public notice, to install Chinese-style “real-time” facial recognition systems. In Detroit, the city signed a $1 million deal with DataWorks Plus, a facial recognition vendor, for software that allows continuous screening of hundreds of private and public cameras set up around the city – in gas stations, fast-food restaurants, churches, hotels, clinics, addiction treatment centers, affordable-housing apartments and schools. Faces caught by the cameras can be searched against Michigan’s driver’s license photo database. Researchers also obtained the Detroit Police Department’s rules governing how officers can use the system. The rules are broad, allowing the police to scan faces “on live or recorded video” for a wide variety of reasons, including to “investigate and corroborate tips and leads.” Detroit’s police chief has so far disputed any “Orwellian activities,” adding that he took “great umbrage” at the suggestion that the police would “violate the rights of law-abiding citizens.”
To be honest, I am less optimistic. Facial recognition gives law enforcement an ability it has never had before: the ability to conduct biometric surveillance – to see not just what is happening on the ground but who is doing it. We have never been able to take mass fingerprint scans of a crowd in secret, and we have never been able to do that with DNA either. With face scans, the police now can.
That ability alters how we should think about our privacy in public spaces – our spaces. In my opinion, it has chilling implications for the speech and assembly protected by the First Amendment: it means the police can watch who participates in protests against the police and keep tabs on them afterward.
You might think I am exaggerating.
In fact, this is already happening. In 2015, for example, when protests erupted in Baltimore over the death of Freddie Gray in police custody, the Baltimore County Police Department used facial recognition software to find people in the crowd who had outstanding warrants – and arrested them on the spot, in the name of public safety. But there is another important wrinkle in the debate over facial recognition: for all their alleged power, face-scanning systems are being used by the police in a rushed, sloppy way that should call their results into question.
Here’s one of the many crazy stories:
In the spring of 2017, a man was caught on a security camera stealing beer from a CVS store in New York. But the camera didn’t get a good shot of the man, and the city’s face-scanning system returned no match. The police, however, were undeterred. A detective in the New York Police Department’s facial recognition unit thought the man in the pixelated CVS video looked like the actor Woody Harrelson. So the detective went to Google Images, got a picture of the actor and ran his face through the face scanner. That produced a match, and the law made its move. A man was arrested for the crime not because he looked like the guy caught on tape but because Woody Harrelson did. Devora Kaye, a spokeswoman for the New York Police Department, said that the department uses facial recognition merely as an investigative lead and that “further investigation is always needed to develop probable cause to arrest.” She added that “the NYPD constantly reassesses our existing procedures and in line with that are in the process of reviewing our existent facial recognition protocols.”
This sort of sketchy search is routine in the face-scanning business. Face-scanning software sold to the police allows for easy editing of input photos. To increase the hits they get on a photo, the police are advised to replace people’s mouths, eyes and other facial features with model images pulled from Google. The software also allows for “3D modeling” – essentially using computer animation to rotate or otherwise change a face so that it can match a standard mug-shot photo. The most troubling thing about all of this, in my opinion, is that there are almost no rules governing its use.
In a bizarre twist, some police departments are even pushing the use of facial recognition on forensic sketches:
They will search for real people’s faces based on an artist’s rendering of an eyewitness account – a process riddled with the sort of human subjectivity that facial recognition was supposed to obviate. Just imagine: if we found out that a fingerprint analyst had been drawing in the missing lines of a fingerprint where he thought they ought to go, that would be grounds for a mistrial. Yet people are being arrested, charged and convicted on the basis of similar practices in face searches. And because there are no mandates about what defendants and their attorneys must be told about these searches, the police are allowed to act with impunity. None of this is to say that facial recognition should be banned forever – don’t get me wrong. The technology may have some legitimate uses. But it also poses profound legal and ethical quandaries. What sort of rules should we impose on law enforcement’s use of facial recognition?
What about the use of smart cameras by our friends and neighbors, in their cars and on their doorbells?
In short, who has the right to surveil others – and under what circumstances can you object?
It will take time and careful study to answer these questions and put the pieces together. But we have time. There is no need to rush into the unknown, because if we continue to develop technology like facial recognition without wisdom and prudence, our servant may prove to be our executioner.