Smart Home Security: How Safe is Your Data?

By Kristina Milanovic, May 25th, 2020

It is estimated that there are 59.3 million smart homes globally. Smart homes are defined as homes containing smart appliances, such as coffee machines, vacuum cleaners, or voice-activated digital assistants [1]. Smart appliances, or smart devices, are advertised to consumers for their convenience [2]. They are designed to be easy to use, and reviews of these devices center on their interactive features [3]. Even respected technology reviewers, like those at Which? magazine, address the security of smart devices in separate articles, so users have to actively seek out that information [4]. This separation of security from convenience makes it difficult for the average user to determine how secure a device is before purchase, yet knowing how secure a device is matters just as much as knowing whether it is easy to use [5].

Security threats to smart devices are not just from hacking, but also from a lack of control over data access.

Typically, physical or personal security is a concept that societies instill in their members from childhood. Information and advice about how to stay safe from strangers, how to safely cross the road, and how to safely travel are readily available. By the time individuals reach adulthood, they are well aware of potential risks to their safety and how to combat them. They have established a mental model of how to stay safe.

Due to recent advances in technology, there has also increasingly been advice about digital security and how to stay safe online. To maximize the penetration of this advice through all layers of society, it is targeted not just at children but at adults as well. Technology is constantly developing and changing, and as a result, digital security threats can emerge far faster than the personal security threats individuals are accustomed to. Because these threats are not about physical well-being, there may be a temptation to take them less seriously, as they may not pose a direct danger to an individual’s physical safety. There may also be a misunderstanding about what exactly the threat is. This is in part because individuals may carry their mental models for physical security over to digital security, which leads them to have incorrect expectations about the nature and consequences of the threats they may face.

However, it is important to take digital security just as seriously as physical security. This is particularly relevant in the context of the smart home. There are two primary issues with the introduction of smart devices into the home. The first is that the device has ineffective security; for example, it is easily hacked. The second, more concerning, issue is that the device may be sufficiently secure, but users are manipulated into allowing manufacturers access to more data than they really want to share.

Case One: A Smart Device Is Safe, but Not Secure

In cases where the security provided is insufficient, an issue arises because there is a difference between the mental models of the user and of the device’s designer. Users may trust these devices because they trust the company that produces them or the designer of the device, or because they are unaware of what data may be collected. This is often complicated by the fact that many users assume privacy and security are synonymous. Because data may be encrypted and safe from hackers, users may assume that it is confidential as well. This isn’t necessarily the case. Particularly with voice-activated virtual assistants like Amazon’s Alexa or Google Assistant, data may be sent to processing centers or may even be inadvertently sent to contacts [4], [5]. When such breaches occur, users may feel betrayed by these devices and become reluctant to use them, because they have built an incorrect mental model of how the device functions.

The separation of security from convenience makes it difficult for the average user to determine how secure a smart device is.

While users are familiar with ensuring the physical security of their homes to protect themselves, digital security is a relatively new concept. Despite education on the topic, users may assume a device is more secure than it is, because they do not consider the security of each individual device connected in their home, only that the devices within the home itself are secure (that they cannot be easily stolen).

Physical security is easier to contemplate than digital security for the majority of users. It is ingrained in society that to prevent break-ins, homeowners need to lock their doors and windows. To do this, they can purchase locks and alarms, which, once installed, only need to be maintained. The mental model of home security is that once purchased, a lock or an alarm system functions primarily on its own. It does not need to be regularly reviewed or updated, and its working condition can be easily checked by sight or touch. This model can be incorrectly carried over to digital security.

Digital security requires constant vigilance. The technology is newer and constantly changing, so users of smart devices need to check regularly for security breaches and updates. Each device provides a potential entry point into a user’s home network, and because all devices are interconnected, any insufficiently secured device can be used to access information held in other parts of the system. This is a particular problem when users introduce smart appliances, such as vacuum cleaners, that have not previously been considered a security threat. These devices are new to the market, readily available, and cheap. As a result, they may not necessarily be secure [8].
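To make that per-device vigilance concrete, the following is a minimal sketch of the kind of check a user (or a tool acting on their behalf) might run: given a hypothetical list of device addresses on a home network, it reports which devices accept connections on ports commonly left open on cheap appliances. The device names, addresses, and port choices are illustrative assumptions, not taken from this article.

```python
import socket

# Hypothetical smart-device addresses on a home network; in practice these
# would come from the router's list of connected devices.
DEVICES = {
    "smart-vacuum": "192.168.1.20",
    "motion-sensor": "192.168.1.21",
    "voice-assistant": "192.168.1.22",
}

# Ports often left open on inexpensive devices: telnet, HTTP, and an
# alternative HTTP port commonly used for management interfaces.
PORTS_TO_CHECK = [23, 80, 8080]

def open_ports(address, ports, timeout=1.0):
    """Return the subset of `ports` that accept TCP connections at `address`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((address, port)) == 0:  # 0 means the connection succeeded
                found.append(port)
    return found

for name, address in DEVICES.items():
    exposed = open_ports(address, PORTS_TO_CHECK)
    if exposed:
        print(f"{name} ({address}) accepts connections on ports {exposed}; review its settings")
    else:
        print(f"{name} ({address}) accepts none of the checked connections")
```

Even a check this simple illustrates the point: it has to be repeated every time a device is added or updated, which is exactly the ongoing effort most users will not sustain.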

The most obvious way to address this is to ensure that Internet service providers have sufficient security on the server side. However, the end user and purchaser of smart home devices has no control over this. What they do have control over is ensuring that each device introduced into the home has sufficient security protocols in place. Yet this may not always be possible. In cases where users want, or need, to introduce a new device whose security level they are unable to determine, one potential solution is to introduce a surveillance system for the smart devices that monitors their security status on behalf of the user.

One such example is the GHOST system, which is currently being developed as part of the EU Framework Program for Research and Innovation Horizon 2020 [9]. While users cannot guarantee the security of individual smart devices, they can use the GHOST system to continuously monitor their devices and receive feedback on their security status. This is the digital equivalent of turning the alarm on. The GHOST system can then provide alerts on a mobile application that identify the source and severity of the potential security threat, for example: “The motion sensor is trying to communicate with hack-me.com. This is not a regular behavior and the communication was blocked.” Such alerts reduce the constant vigilance needed from users. By using GHOST, they can outsource the system checks and act only on individual threats. For example, if a device is identified as potentially insecure, they can take further action by removing it from the system, upgrading its security software, contacting the manufacturer, and so on.
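The alert quoted above follows a recognizable pattern: compare a device’s observed communication against its expected behavior and flag anything outside it. The sketch below illustrates that principle only; it is not the GHOST implementation, and the device names, destinations, and allow-lists are invented for the example.

```python
# A minimal sketch of allow-list monitoring: flag device traffic to any
# destination outside the set of destinations seen during normal behavior.
# Traffic observations are assumed to be available as (device, destination)
# pairs, e.g. exported from a home router's logs.

EXPECTED_DESTINATIONS = {
    # Hypothetical per-device allow-lists built from regular behavior.
    "motion-sensor": {"api.sensor-vendor.example"},
    "voice-assistant": {"cloud.assistant-vendor.example"},
}

def check_traffic(device, destination):
    """Return an alert string if the destination is not expected for this device."""
    allowed = EXPECTED_DESTINATIONS.get(device, set())
    if destination not in allowed:
        return (f"The {device} is trying to communicate with {destination}. "
                "This is not a regular behavior and the communication was blocked.")
    return None

# Example observations; the second mimics the alert quoted in the text.
observations = [
    ("voice-assistant", "cloud.assistant-vendor.example"),
    ("motion-sensor", "hack-me.com"),
]

for device, destination in observations:
    alert = check_traffic(device, destination)
    if alert:
        print("ALERT:", alert)
```

The user-facing value is in the last step: the system does the continuous checking, and the user only has to decide what to do when an alert appears.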

Case Two: A Smart Device Is Secure, but Users Lack Control Over Their Data

Regarding the second issue, where a smart device may be sufficiently secure but users are manipulated into allowing manufacturers access to more data than they want to share, the introduction of the General Data Protection Regulation (GDPR) has significantly helped to regulate data control, particularly with respect to privacy and security [10]. In principle, it is a good method of control, especially where it lets users easily understand and control their data. In reality, however, it is difficult to gauge how aware users are of what is happening to their data, either because they have not been able to read, or do not understand, the agreements they make with technology companies. GDPR has put a large onus on companies to delete and control data that they might previously have been profiting from. It is therefore not in their interest to allow users to opt out of sharing their personal data. As a result, they resort to manipulating users in order to keep their data.

An example of this is Facebook, which has previously been caught collecting more data than it informed its users of, and has now resorted to using dark patterns of user experience (UX) design in order to comply with GDPR while still collecting the data it wants. Dark patterns are defined as situations where “designers use their knowledge of human behavior (e.g., psychology) and the desires of end users to implement deceptive functionality that is not in the user’s best interest” [11]. This can take the form of hiding options, adding last-minute purchases or costs to shopping baskets, or using trick questions to convince users to choose options that they might not otherwise have chosen.

Facebook asks users to allow the use of facial recognition software on uploaded photos. In order to opt out, users must navigate through three pages of information before they can do so. According to Facebook, the motivation for this is to allow Facebook to detect which pictures its users appear in and therefore protect them from strangers using their photos [12]. Facebook Artificial Intelligence Research (FAIR) has recently unveiled technology that can build 3D models from 2D images, one of the examples being a 3D model of a person from a photograph [13], [14]. Taken together, it is not implausible that FAIR could use images collected on the Facebook platform to develop its human models and use them for purposes beyond user security, or that this 2D-to-3D technology, and potentially user images, could be sold on to, or shared with, advertising companies. This links back to the first case. In an extreme scenario, provided user consent has been given, online shoppers could see their own face on the bodies of models, allowing them to estimate how clothes would fit without trying them on. This is convenient, but the data transferred may not necessarily be secure.

However, this hypothetical scenario is a long way off. The use of dark patterns in UX design is not specific to Facebook. Dark patterns are easy to implement and are widespread. They can be frustrating, but at the moment the only way to combat them, provided that users want to use the services offered by the designer, is to persevere through the options, reading carefully. Realistically, many users will not do this. The amount of effort needed to counteract dark patterns for every website and service leads many users to sidestep the issue and simply click “accept,” which is exactly what the designers of such interfaces hope for.

What happens to collected data once it has been sourced from the end user is also important. Whether the data has been hacked from an insecure system or device, as in case one, or the user has been manipulated into giving up more data than they intended, as in case two, the end result is the same. The sourced data can be repurposed by the companies who collected it, or sold on to third parties, and the user has no control over this, or in some cases even knowledge of it. Any precaution implemented at the user end can only limit the amount of data collected. For proper security, and privacy, these issues need to be addressed by the manufacturers of smart devices and the designers of their interfaces.

In a smart home scenario, instead of relying on the user to check the security level of the smart devices in their home, the onus should be on the manufacturer. Value sensitive design [15] and design contractualism [16] are ways to protect users from unwanted consequences of using technology. GDPR is one way of ensuring that data users and manufacturers of technology implement privacy by design, and it goes some way towards data security. What is needed, however, is a level above this: ethical designs with the user’s best interest in mind, avoiding dark patterns and offering users simple, straightforward options for their security (and privacy). Without regulation to this effect, it is unlikely that any commercial organization would adhere to these ethical designs if they went against the company’s interests. For example, if a commercial organization needed, or wanted, extra data for testing new algorithms or for user profiling, it is unlikely to design technology that gives users privacy and security by default. The Facebook example is typical of the way a company might adhere to the letter of the GDPR law, but not the spirit of it. More stringent regulation is therefore needed.

Manufacturers need to put their users first and consider how secure their devices are, not just in terms of encryption or pseudonymization when necessary, but at all times. Devices need to be designed with security in mind, to answer not just the question “Can someone be identified by this data?” but also “Can this data be accessed?” and “Would the user want to share this data?” Given the speed and diversity of technology development, it is not always immediately obvious what purposes seemingly innocuous data could serve in the future. It is necessary to discuss and implement safeguards today to protect users from the potential threats of tomorrow.
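As an illustration of why pseudonymization alone does not settle those three questions, the following is a minimal sketch built around a hypothetical telemetry record from a smart device: replacing the direct identifier with a salted hash partly answers only the first question, while the behavioral data, who can access it, and whether the user wanted to share it are untouched.

```python
import hashlib
import secrets

# The salt must be kept secret and stored separately from the data;
# otherwise identities can be recovered by re-hashing candidate identifiers.
SALT = secrets.token_bytes(16)

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a salted SHA-256 hash."""
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()

# Hypothetical telemetry record from a smart device.
record = {"user": "resident@example.com", "room": "kitchen", "motion_events": 14}
record["user"] = pseudonymize(record["user"])
print(record)

# The record no longer names the resident, addressing "Can someone be
# identified by this data?" in part, but it still reveals behavior and says
# nothing about who can access it or whether the user wanted to share it.
```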

ACKNOWLEDGMENT

A shorter version of this article appeared on the GHOST blog [17]. The author is funded by EU Grant Agreement number GA-740923.

Author Information

Kristina Milanovic is with Imperial College London, United Kingdom. Email: km908@ic.ac.uk.

 
