“It is a profound and necessary truth that the deep things in science are not found because they are useful; they are found because it was possible to find them.” – J. Robert Oppenheimer
In any given week, any given month of the modern era, you’d be forgiven for thinking that the most important issues of our time concern a small roster of big-name political personalities. Donald Trump, Nancy Pelosi, Joe Biden, Alexandria Ocasio-Cortez, Mitch McConnell, etc. Those are the names, particularly the first, that dominate every headline and fill acres of newsprint both on and offline. And the daily decisions of those people certainly have an outsized impact–even if it’s often only on the mental states of the people who live in this country. But if a future person were attempting to analyze our current existence, referencing only those endless reams of newsprint and untold hours of pundit-laden news footage, they would be unable to describe even the barest details of our lives, or just how our era developed those tools that would, in time, shape a world of trouble yet to come.
In truth, we live inside an invisible infrastructure. Invisible because it is digital, and also because it’s rarely discussed. And it’s being built continually around each of us, without our conscious participation. For instance, you are reading this article on an internet-connected device. How did you reach it? Did you click through Facebook? Or Google? Your footprints to this page were recorded. Even if you arrived without the help of a search engine or social media, your Internet Service Provider knows you’re here and it’s within its rights to sell that information to anyone who will buy.
Does the device you’re using have a camera on it? If you’re using a laptop, like I am to write this, you may be savvy enough to have placed a sticker or piece of tape over your webcam. But perhaps not. If you’re reading on your phone, you certainly have a camera too, probably two cameras–one facing the world around you, and one directed at you. At the moment, they probably aren’t transmitting anything, but did you forget they were there? What are those cameras seeing right now, and what companies could exercise the ability to access that image, should they want to? Remember, your phone has a microphone too, and a GPS locator. What apps are currently recording your location, your browsing patterns, your contact history?
What other things around you are connected to the internet? Do you have any security cameras watching you? Those are almost certainly internet-connected, storing images on a server somewhere. Is there a “smart” speaker, like an Echo or a Nest nearby? A “smart” television set or thermostat? Does your neighbor have a Ring doorbell, recording the image of your car leaving and returning to your home at regular intervals? These things are around you all the time—particularly your phone, which likely connects your personal, financial, and working lives into one place. Ask yourself whether you’ve spent more than an uneasy moment here and there questioning the stream of data continuously collected and stored about you in databases around the world. Ask yourself how often it’s being discussed on the news, or among your friends. In twenty years, it’s unlikely you’ll remember the ridiculous thing Trump said yesterday. But what will that stream of data have evolved into, by then?
Any change that brings more convenience to your life requires a corresponding sacrifice of personal information. Credit cards are an obvious example. They’re much simpler than carrying cash around, but as you use them, they use you back. They create a record of every purchase, stored both by your bank and also by the seller, which outlives each transaction. And while banks may follow certain privacy rules, retailers are free to sell records of customer purchases to data brokers, who profile you for advertising purposes. We can assume, as more options for cashless purchasing take hold in upcoming years, even greater amounts of data will be scooped up in the process. And self-checkout lanes will proliferate, with multiple cameras built into each screen. We can presume the existence of increasingly high-definition visual records of each transaction, tied not only to your credit card but to your face.
Similarly, any “smart” device that you bring into your home—from television sets to refrigerators, bluetooth speakers and thermostats—will store information on you and your habits. This data will be used, at least initially, for the primary purpose of advertising. And, over time, each new stream of data will combine with the other existing streams to help those companies, and the data brokers who mine them, build a more accurate profile of you as a consumer.
Somewhat more alarming than all this, if only because it lacks even the ubiquitous “advertising” motivation, is the Ring doorbell, owned by Amazon. These doorbells record video of the scene outside the user’s door. Over 600 police departments already access footage from them, via the company’s Neighbors app. This footage can, given the orientation of the doorbell, provide detailed information about the user’s comings and goings, as well as the behavior of any other homeowners within view, any children playing outside, and any dog walkers or pedestrians who happen to cross its path. The company has plans to add facial recognition to its services, though its adoption will be “thoughtful,” according to an Amazon VP. It will be rolled out, of course, only in accordance with the express consent and desire of consumers.
Everything that’s collected about us is, supposedly, taken with our consent—when we shop, or visit Facebook, or type information into an app. But the question of how much consent you can meaningfully provide, particularly in cases like the Ring doorbell, is murky. As a homeowner, you may be comfortable with livestreaming video of yourself and your family through Amazon’s servers. You may even be comfortable with the local police department storing video from your front door. But can you provide the consent of every person who is captured by that video camera? Every kid who visits your kids. Every couple you invite for dinner.
And the camera’s little eye looks farther than the front stoop. Your neighbor who sneaks the occasional cigarette behind his garage might not want to be captured on video. The mom next door might not like her little kids’ playtime being recorded and blasted out over the Neighbors app. An elderly person down the street might not like your knowing that he’s hired a home health aide to come by at regular intervals; another might be embarrassed to have her mobility problems captured on your camera. Then add facial recognition into the mix. Your door could identify–for you, Amazon, and the police–precisely who visits your neighborhood, at any hour, for any purpose. At that point, the question of consent will be entirely out the window. It may be your door. But it’s their life.
You could argue, and some do, that the whole idea of privacy is already passé. For the most part, we’ve gotten used to the idea that city streets will be blanketed with cameras. Stores have security cameras. Streets have traffic cameras. A few of us—myself included—grumble regularly about the sheer volume of them, and the sense of surveillance they inspire. But still, each year’s tally of cameras dwarfs the last, and few complain.
A while back, New York City hired a consortium called CityBridge to begin replacing its grungy, long-ignored payphones with LinkNYC wifi kiosks. Nearly 1,800 have been installed since 2016 on streets all over the five boroughs. Each broadcasts a free wifi signal, provides charging ports, and contains 30 sensors and three cameras overlooking pedestrian traffic. That’s roughly 5,400 extra cameras on the sidewalks in the last few years. And the company already has contracts in the works with other major cities. Is it so awful? No one seems to ask about the future, when those cameras are updated to include facial recognition. Will there be a firestorm of anger when ‘pedestrian traffic’ becomes Bob-Anna-Ramon-Maria-Philippe traffic? At that point, the City and its partner could maintain a seamless history of each pedestrian’s travel, wherever those individuals go. It isn’t a conspiracy theory. The question is going to come up, sooner rather than later.
A startup named Clearview made waves in tech news recently, as journalists at the New York Times learned of its unprecedented use of facial recognition technology. The company has, over the past few years, built a database of more than three billion images tied to individual identities. They scraped images and profiles without consent from public sites like Facebook, YouTube, and Venmo. By applying facial recognition to those images, the startup created a searchable tool for law enforcement, which can pair an uploaded image—a still from a grainy security camera, for instance—to a particular individual, and provide officers with other images of that person from around the web.
What separates the Clearview app from existing law enforcement tools is its scale. We’ve known for a while that certain states and the FBI maintain databases of DMV images and mugshots for use with facial recognition, and that the FBI uses facial recognition software (courtesy of Amazon) to scan through surveillance footage. We’ve known that border agents are beginning to scan the face of everyone who crosses into the country, and that airports are ramping up their own facial recognition tools.
And if, by the way, you didn’t know these things, this is a good time to ask yourself why that is. Why isn’t this considered as important as, for instance, the family squabbles of the British monarchy? Those of us who spend our time monitoring these technological advances have seen a number of troubling developments in recent years, but this Clearview startup represents a massive leap into new territory.
Any facial recognition technology is only as good as the database of images it’s working from. Consider how difficult you would find it to recognize someone in a crowded room whom you’d only seen in a driver’s license picture. Computers aren’t any more skilled. From one front-facing image, like a DMV picture, it’s difficult to draw enough data to match an image that’s taken outside, at a distance, or from the side. With a database of Facebook pictures, though, in which the same person is seen from multiple angles, in multiple settings, the task becomes considerably simpler. And by compiling images from multiple sources across the web, in violation of multiple terms of service agreements, the Clearview database has grown drastically larger—and thus considerably more effective—than any ever compiled.
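For readers curious how that works under the hood, the intuition can be sketched in a few lines of Python. Facial recognition systems typically reduce each photo to a fixed-length “embedding” vector and declare a match when a probe image’s embedding is close enough to a stored reference. The numbers below are invented toy vectors, not real face data; the point is only that matching against many varied reference images yields a closer match than a single frontal photo can.

```python
import numpy as np

def cosine_distance(a, b):
    """Distance between two embedding vectors (0 = identical direction)."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# One frontal reference, like a DMV photo (toy numbers, purely illustrative).
frontal_ref = np.array([0.9, 0.1, 0.1])

# A probe taken outdoors, from the side: same person, but the embedding shifts.
side_probe = np.array([0.3, 0.8, 0.2])

# Against the lone frontal reference, the probe looks like a poor match.
single_dist = cosine_distance(side_probe, frontal_ref)

# Scraped social-media photos add references from many angles and settings.
scraped_refs = [
    frontal_ref,
    np.array([0.4, 0.7, 0.2]),  # side view
    np.array([0.6, 0.4, 0.5]),  # different lighting
]

# Matching against the *closest* of several references can only tighten
# the distance, which is why a bigger, more varied database makes
# identification easier.
multi_dist = min(cosine_distance(side_probe, r) for r in scraped_refs)

print(round(single_dist, 3), round(multi_dist, 3))
```

In this toy run, the side-view probe is a distant match to the lone frontal reference but a near-perfect match once the scraped images are in the pool—a small illustration of why a database built from Facebook and YouTube photos is so much more potent than one built from mugshots.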
Also, while Clearview currently partners with public entities like law enforcement, it is a private company. Its database has no form of public oversight. It can be sold to anyone. It can be used for any purpose. The app already contains code that can pair with augmented reality glasses—like the ill-fated Google Glass—so that a user could theoretically scan any person he saw throughout the day, run the image through the app, and match that person’s identity to a number of public online profiles containing other images and personal information. The potential for abuse is obvious, even now, with its use restricted to law enforcement and private security companies. Investors already believe the app will someday be available to the general public.
What the well-named Clearview app could bring, ultimately, is a crystal clear view into the otherwise anonymous lives of everyone around us, at all times. It could herald the end of anonymity altogether. No more anonymity on the street. Or in the drugstore. Consider, already, how many companies know when you purchase a prescription or when your location is pinged to a doctor’s office. A therapist’s office. An AA meeting. Now add to those invasions the notion that every person sitting across from you in a waiting room, every building you pass, could pinpoint exactly who you are, and list that captured moment in a central database of your movements and activities.
It sounds too extreme, when you put it like that. The complete end of anonymity. Isn’t that an overreaction? No one would go for that. Surely no one would unleash that kind of hell on the general public. We know of two companies at least—Facebook and Google—that have both developed the technology and then held it back out of fear. But unfortunately, one of the greatest lessons of history, and the history of technology in particular, is that once something becomes possible, it quickly becomes inevitable.
It may not be this app. It probably won’t be this year. It may be a decade from now, and a Chinese company rather than an American one. Not too long ago, the early critics of gene editing were told that no one would dare use CRISPR to create genetically engineered babies. That it would violate every ethical norm of science. Now, of course, someone has. And the seeds of Hiroshima and Nagasaki were planted as soon as scientists split the atom. The technology that erases anonymity is far too possible now to be anything but inevitable.
Even those people who tend to chant “I don’t have anything to hide” whenever some new government or corporate entity conspires against their privacy should pause at this thought. For one thing, you don’t know who will gain access to your data in the future, or for what purpose. Will they punish you for using too much toilet paper in a public toilet? Will they question why you’re “suspiciously” walking your dog through a neighborhood? Sure, you might agree to share financial information with retailers, and location history with Google. You’d consent to quite a few things, for the sake of added convenience. But convenience is one thing. Intimidation is another. And what will your consent mean, if your only means of opting out is never showing your face?
The abuses of facial recognition are already well-documented. The most valuable Artificial Intelligence startup company in the world is called SenseTime, and it provides the backbone of China’s surveillance system, in which minority populations like the Uighurs are targeted for re-education and enslavement, and the larger population is tethered inescapably to their “social credit rating.” In China, a computer can deny you access to transportation, can deny bank loans and job offers, can isolate you physically from your friends and family members whose own ratings are put in peril by associating with you. Yes, they have the fantastic convenience of paying for things with a smile. Is it worth it?
It’s creeping into freer societies too. In Hong Kong, last year, the police used facial recognition to jail political protesters, and protesters responded by using facial recognition to target police officers for harassment off the job. Last January, a man in London was fined by police for refusing to let them scan his face as he walked past. And in November, a U.S. Border Patrol agent took a photo of an ACLU lawyer without his consent, despite stated policies that should have allowed US citizens to opt out of facial recognition at the border.
There are as many reasons to fear the encroachment of facial recognition as there are individuals. Over a decade ago, Paul Ohm, a Professor at the University of Colorado, coined a new phrase to describe the mass data-collection activities of modern tech companies. He suggested they were building a “Database of Ruin.” He noted that, by mining browsing and shopping data for advertising purposes, tech companies had gained the ability to connect every single user to at least one terribly embarrassing or painful secret. One thing that could “ruin” them. With facial recognition, the Database of Ruin would evolve out of the depths of the digital world and follow us onto the city streets.
It’s easy to think of examples. Many people wouldn’t want to be publicly identified while, say, entering a clinic that provides abortions or a church that offers Narcotics Anonymous support groups. Add to that list the locations of protest group meetings. Church services of unpopular religious groups. Marriage Counselors. Shooting ranges. Consider what you would willingly say to a friend while talking outside, if any overhearing passerby could record your words and immediately attach them to your identity. What, in such a world, would remain of medical privacy? Of confidential sources for journalists? Knowing the prolific dossiers that are held on each of us by advertising data brokers, by Facebook and Google, and People Finders, and retailers like Amazon—why would we ever allow the physical details of our faces to be compiled into that mass of data?
There are a number of critics of facial recognition. But too many of the criticisms I’ve read resort to complaining that, so far, the technology isn’t entirely reliable. That the all-seeing eye still hasn’t gained perfect 20/20 vision. Even the New York Times reporters who broke the story on Clearview spent a few paragraphs lamenting that the app’s algorithms hadn’t been vetted for reliability by independent experts. A quoted researcher from Georgetown pointed out one downside to the app’s sizable database: “The larger the database, the larger the risk of misidentification because of the doppelgänger effect. They’re talking about a massive database of random people they’ve found on the internet.” And numerous other critics have pointed out the well-documented failings of facial recognition in accurately identifying people of color, particularly women of color.
Unfortunately, by returning continually to this criticism, many of the people who intend to protest the technology end up sounding more like promoters of its development. They suggest that a world governed by facial recognition technology would be improved, not in fact substantially degraded, as the algorithms learn to identify people more accurately.
There are some positive signs. California has passed a three-year moratorium on police using facial recognition technology in the state, and the city of San Francisco has outright banned the practice. And while federal law is woefully behind the times on every technological question, a couple of lawmakers, in recent House hearings on facial recognition, seemed to understand the failings of the “it isn’t reliable” criticisms. Republican Mark Meadows, of North Carolina, told the Oversight Committee, “If we only focus on the fact that they’re not getting it right with facial recognition, we missed the whole argument, because technology is moving at warp speeds…We’re talking about scanning everybody’s facial features, and even if they got it 100 percent right, how should that be used?” Democrat Gerry Connolly, of Virginia, agreed: “Irrespective of its accuracy, there are intrinsic concerns with this technology and its use.” In the Senate, a recent bill proposed by Utah’s Mike Lee and Delaware’s Chris Coons would require federal investigators to get a warrant before using long-term facial recognition surveillance against a suspect. And while privacy rights activists say the bill contains considerable loopholes, at least it suggests that certain influential people are considering the issue.
As for the rest of us? We need to pay attention. Remember, there are far more important issues in our lives than who’s yelling at whom on TV. The media follows clicks and ratings, unfortunately, so if we want technology news to make the front pages, we all need to read technology news. We need to inform our friends and neighbors of the issues with self-checkout lanes and Ring doorbells. We need our City Councils to be aware of the dangers of increasing the numbers of cameras on the streets. We need to stop bringing “smart” appliances into our homes, and stay conscious of the flow of data that is created every time we interact with the digital world. Most importantly, we need to be talking about privacy, and what we’re willing to sacrifice on the altar of convenience.
Any time you’re presented with a more efficient, frictionless way to do anything, ask yourself what you’re giving up in the process. And if you’d like to have some fun with the brokers and tech juggernauts, I’d highly advise taking a monkey wrench to the invisible infrastructure around you. Behave unpredictably online. Make yourself difficult to profile. Conflict with your own data. Start by looking for the “Do Not Track” setting in your browser. Turn off the location access for your apps. If you’re using Google, research topics wildly outside your normal interests. Shop less online, and avoid the self-checkout.
You want to really drive them nuts? Use cash.
Tonya Audyn Stiles is Co-Publisher of the Canyon Country Zephyr.