Are Video Doorbells Using Us as Security Guinea Pigs?

March 18th, 2021


A Guest SSIT FutureProof Blog Post

by Chey Cobb


Reading the recent headline “New Surveillance Video Shows Horrific Attack on Lady Gaga’s Dog Walker,” your first thought might be: I hope the dog walker and the dogs are okay. A second thought could be: a home security camera captured the attack on video? And if you’re a public interest technologist, you may pause to consider what these devices mean for society. Unfortunately, society has tended to trust them unreservedly. Let’s consider some of the implications of the public’s trust in Internet-connected surveillance cameras.

Going Gaga Over Surveillance?

While surveillance cameras are not a new technology, some video surveillance systems now possess capabilities such as AI-based video analytics and cloud-based live-streaming, with content sharing enabled at scale. In America, these technologies are being deployed at a rapid pace in residential areas by homeowners, housing associations, landlords, and tenants, often with the encouragement of law enforcement.

This growing domain of Internet-enabled residential property surveillance often takes the form of “video doorbells” like the one that recorded the attack on Lady Gaga’s dog walker. Research by Strategy Analytics in 2019 put the market leaders at Amazon’s Ring, with a 40% share, and Google’s Nest, with 24%; the rest of the market is made up of multiple players, each with less than a 10% share.

Video doorbells and related technologies, along with the data they generate, will continue to be abused, undermining the security of what is being pitched as a security technology.

That same research indicated that a major factor in a person’s decision to install a video doorbell was “feeling more secure.” In fact, Amazon capitalizes on that sentiment in its pitch for Ring doorbells, which proclaims: “Smart Security Starts at the Front Door.” One ad for Google’s Nest Hello device reads: “Say hello to a whole new kind of security.” Although we don’t know which brand of video doorbell captured the attack on the dog walker, it is important to note that it did not belong to Lady Gaga or her dog walker; it was a device located on the front porch of one of her neighbors. These devices often have a field of vision that includes parts of the front yard, the sidewalk, the street beyond the property boundary, and properties across the street.

Depending on a neighborhood’s architecture and property layout, it can be almost impossible to install a video doorbell that doesn’t capture video of people beyond the property’s boundary; the implications of this will be addressed in a moment.

Securing Society with AI?

Most video doorbells offer some form of event recognition, alerting, and recording capabilities. They can detect someone walking past your house and alert you via text message, even as they capture video of the passer-by, store it in the cloud, and send a link to you to review. Some systems utilize live-streaming and make it relatively easy to share the video with friends, neighbors, curtain-twitching forums, and law enforcement. (Similar capabilities are also provided by external home surveillance cameras from companies like WYZE and Arlo.) Amazon’s Ring enables a free, app-based “neighborhood watch” feature — “Neighbors” — that alerts you to crime and safety events in a radius up to five miles around your home.
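
To make the mechanics concrete, here is a minimal sketch in Python of the detect-alert-record loop such a device runs. Vendors’ actual pipelines are proprietary, so this version stands in with simple frame differencing from the OpenCV library; the alert and cloud-upload steps are placeholders, not any vendor’s real API.

import cv2

MOTION_THRESHOLD = 5000  # number of changed pixels that counts as an "event"

def handle_event(frame):
    # In a real product, this is where a clip would be uploaded to the cloud
    # and a text/push alert sent with a link to review it. Placeholder only.
    cv2.imwrite("event.jpg", frame)
    print("Motion detected: frame saved, alert sent (placeholder)")

def detect_events(source=0):
    cap = cv2.VideoCapture(source)  # 0 = the default local camera
    ok, prev = cap.read()
    if not ok:
        raise RuntimeError("camera not available")
    prev = cv2.GaussianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
        # Count pixels that changed meaningfully since the previous frame.
        delta = cv2.absdiff(prev, gray)
        _, mask = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(mask) > MOTION_THRESHOLD:
            handle_event(frame)
        prev = gray
    cap.release()

if __name__ == "__main__":
    detect_events()

Everything beyond this simple loop (facial recognition, cloud storage, the sharing features) is added on the vendor’s servers, out of the owner’s sight and control.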

The companies behind these products are firmly committed to deploying AI-powered video analytics and behavior prediction, even though current claims of AI-based detection are to some extent a marketing euphemism for machine learning (ML). The American Civil Liberties Union (ACLU) published a 50-page report entitled “The Dawn of Robot Surveillance” in 2019 that examined the major social implications of this technology. The current approach turns customers, and their neighbors, into unwitting security guinea pigs.

Predetermination of Guilt?

Imagine a screen that displays condensed video-doorbell highlights of all the people passing through that neighborhood in a given time period. Imagine they have been analyzed by facial recognition and labeled with direction and speed of travel, as well as predictions of emotional state and intentions. Filters on the system could be adjusted to show, for example, all males that don’t live in the neighborhood, are tagged as “angry,” appear to be non-white, and are heading towards the park.

Such filters can be built with the technology now being deployed by millions of households in America. Commercial interests are racing to create refinements and extensions of this technology. Two keys to success in this race: access to a large installed base of cameras, and massive amounts of their video content for use as AI training data.
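
To see how little engineering the filtering layer itself requires, consider this Python sketch. The detection records and label names below are hypothetical, invented for illustration; the point is that once an AI pipeline has tagged passers-by with attributes like these, the discriminatory query described above is a handful of lines.

from dataclasses import dataclass

@dataclass
class Detection:
    person_id: str          # assigned by facial recognition
    resident: bool          # matched against a neighborhood face database
    predicted_sex: str      # AI-predicted attributes; all highly error-prone
    predicted_race: str
    predicted_emotion: str
    heading: str            # direction of travel, e.g. "toward_park"

def filter_detections(detections):
    # The filter described above: non-resident males tagged as "angry",
    # classified as non-white, and heading toward the park.
    return [d for d in detections
            if not d.resident
            and d.predicted_sex == "male"
            and d.predicted_emotion == "angry"
            and d.predicted_race != "white"
            and d.heading == "toward_park"]

All of the hard, and dangerous, work is in producing the labels; the query itself is trivial, which is exactly why such filters are easy to build and easy to abuse.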

But how many of the consumers buying video doorbells and smart home surveillance cameras are aware of these implications? Security is a driver for buyers, but sellers are often driven more by their huge appetite for data, market domination, and profits than by concerns for public safety. Even folks who are aware of those drivers may not see them as warning signs. But experts in cybersecurity, data privacy, and risk management definitely do.

Sadly, the temptation to try to own a technology market by releasing wave after wave of hardware and software that is vulnerable to abuse has proven irresistible for many companies over the past few decades. We are all familiar with the incessant cycle of product updates, patches, and security fixes that follows breaches, vulnerability disclosures, and cyberattacks. Meanwhile, the lack of transparency at every level makes it impossible for consumers and cybersecurity pros alike to know what’s really happening “under the hood.”

We are not just unpaid beta testers of products, but also unwitting guinea pigs in a massive social experiment.

Given the lack of effective standards, oversight, and product certification that the tech industry has enjoyed — a situation upon which currently pending legislation around Internet of Things (IoT) and data privacy is unlikely to have much impact — it seems reasonable to hypothesize that surveillance technologies, along with the data they generate, will continue to be abused, undermining actual security, and eroding the trust that people have in tech companies.

As for the wider implications of American consumers building a mass surveillance system that feeds the appetite for AI-based law enforcement that exists in some quarters: some countries are unlikely to let this happen, while in others it has already resoundingly succeeded.

Installing a video doorbell that captures images of people outside the boundary of your private domestic property — like neighbors’ homes or gardens, shared spaces, public sidewalks, or a street — also runs afoul of privacy laws in many jurisdictions. In the U.K. and the EU, the General Data Protection Regulation (GDPR) applies, and owners of video doorbells may have to delete footage of people who request its removal. Owners also have a legal responsibility to secure the system and to post notices about what it records.

In closing, it is worth noting that, according to many criminologists, the value of surveillance systems in crime reduction is nowhere near as great as the purveyors of such systems proclaim in their “benefit to society” marketing and PR programs. While some studies have shown a drop in crime after surveillance systems are installed, the effect usually proves temporary, so there is little long-term crime reduction. The same pattern can be seen in other attempts to reduce crime that fail to address its root causes. Society needs a comprehensive approach to crime reduction, not just more hardware and an app. And we should all resist becoming guinea pigs in experiments run by tech firms, even when they say it’s for our own good.

More Information

How to secure your Ring camera and account
https://www.theverge.com/2019/12/19/21030147/how-to-secure-ring-camera-account-amazon-set-up-2fa-password-strength-hack

Ring security camera settings
https://www.wired.co.uk/article/ring-security-camera-settings

Video doorbell security: How to stop your smart doorbell from being hacked
https://www.which.co.uk/reviews/smart-video-doorbells/article/video-doorbell-security-how-to-stop-your-smart-doorbell-from-being-hacked-aCklb4Y4rZnw

How the WYZE camera can be hacked
https://learncctv.com/can-the-wyze-camera-be-hacked/

How to secure your WYZE security camera account
https://www.cnet.com/how-to/wyze-camera-data-leak-how-to-secure-your-account-right-now/

How to protect ‘smart’ security cameras and baby monitors from cyber attack
https://www.ncsc.gov.uk/guidance/smart-security-cameras-using-them-safely-in-your-home

Yes, your security camera could be hacked: Here’s how to stop spying eyes
https://www.cnet.com/how-to/yes-your-security-camera-could-be-hacked-heres-how-to-stop-spying-eyes/

To understand how hackers look for vulnerabilities in digital devices, check out this article at Hackaday: https://hackaday.com/2019/03/28/reverse-engineering-a-modern-ip-camera/. It links to a cool, four-part reverse engineering exercise by Alex Oporto: https://dalpix.com/reverse-engineering-ip-camera-part-1


Author Information

Chey Cobb is a public interest technologist who has worked in private industry, education, and highly classified environments. Her current focus is on the dangers of commercial surveillance tech, unethical tech development, and the law of unintended consequences. She also makes killer guacamole.