Are Wearables Really Ready to Wear?

By Isabel Pedersen on June 29, 2017, in Editorial & Opinion, Human Impacts, Magazine Articles

The announcement of Google Glass on April 4, 2012, changed the landscape of computing for the everyday person. Afterwards, for a full year before the first developers’ release in May of 2013, Glass was a figment of our collective imagination. Google Glass promised a “wearable” eye display for the masses that would revolutionize smartphone culture. Our reaction was both amazement and fear, because Google claimed that Glass would become intimately intertwined with our daily lives. We were struck by Glass’s promise of nearly instantaneous digital communication. It would give us direct visual access to our friends, schedules, location information, and the Internet, to name only a few of the tantalizing features. No more clunky phones. In a somewhat more understated way, Glass’s announcement also exuded something of a superhero aura. Though the feeling was partly a fabrication of the marketing material, Glass made us feel as if life would thenceforth be newer, better, cooler, smarter. Simultaneously, we were terrified (and still are!) by the potential for such a technology to rob us of our privacy and dehumanize everyday life.

Are wearables really ready to wear? The hype surrounding the massive 2014 Consumer Electronics Show (CES) celebrated Glass and many other wearable prototypes with wild enthusiasm, even though many (including Glass) have not yet been released to all consumers. What is overwhelming, however, is our general acceptance that Glass is a mass social phenomenon that will take shape. Since its announcement in 2012, Glass has been accepted, for better or worse, as a given with as much certainty as the setting sun. And it is precisely this blind dedication that is cause for concern. Have we completely lost confidence in our ability to control the idea of our digital lives? “Technology will march on” seems to be our mantra. Emergent digital media, like Glass, are invented, designed, adopted, and even celebrated before society is able to understand their impact on lives, culture, values, art, politics, privacy, and social practices.

How do these beliefs about technology enter culture? Western society has long fantasized about technologically enhanced human superheroes across a wide spectrum of film enterprises. Iron Man’s exoskeleton enhances Tony Stark to the point that he is nearly invincible. The Batcave spawns a significant arsenal of gadgets powerful enough to transform Bruce Wayne into Batman. Hal Jordan’s powerful ring, when he is the Green Lantern, makes him not only strong but also able to alter the space-time continuum. As mortals, each relies on body-worn technologies to make himself superhuman, better-than-human, or, essentially, transhuman. No doubt, the celebration of these characters is very much bound up in the explosion of superhero films we are experiencing with DC Comics’ and Marvel’s constant film franchise releases.

However, popular culture is not only entertainment. It reflects the deeply entrenched values that we hold as a society, fueling us to accept certain technologies. A salient example is President Barack Obama’s joke during a February 25, 2014, press conference announcing technological innovations, when he said of the U.S. government, “We are building Iron Man” [1]. What could be more compelling?

The popular turn to wearable technologies in particular is yoked to this kind of societal adoration of the “superhuman.” Progress is bound up in a desire to vault humanity’s perceived limitations.

But there is more to this story. Wearable technology grasps for superhero scenarios because it has no past models evocative enough to make sense of this new paradigm. Desktop computers sprang from the idea of typewriters. Smartphones evolved from the landline phones of the past hundred years. But wearables always seem to point to The Terminator, Minority Report, and Iron Man for technical explanations and social modeling. There is no precursor to a full-scale wearable computer platform. This condition incites both euphoria and terror. But it also reveals a gap.

I am fascinated by the leaps-of-faith in the field of emergent wearable technology. But I also believe these latest technological developments require much closer exploration. An increasing number of future-promised digital gadgets, from heads-up eye displays to brain-computer interfaces to bionic contact lenses to digital skin technologies, are emerging or being promoted in the mainstream news media. In my book Ready to Wear, I claim “despite the common term human-computer interaction, much technology goes unexplored as to its impact on people and, more specifically, the concepts and embedded meanings that affect humans. [I] analyze the ongoing rhetorical friction between technology that strives to augment aspects of humans and language about technology, which often results in both humanizing and dehumanizing textual constructions” [2]. We need to explore how devices will affect digital life at the design stage, and not only rely on policy-making after the fact, when it is too late.

Beyond the gadgets, clips, wrist-worn devices, and glasses, wearable tech drives toward more natural interaction with applications. Fueled by a goading popular culture, a growing movement encourages us to use the body itself, namely the skin, as a medium of communication. On October 16, 2012, Wired magazine writer Hugh Hart reviewed True Skin, a six-minute short film that achieved “viral liftoff” within days of its release on Vimeo [3]. He writes: “Sexbots, a deadpan antihero, a creepy futuristic black market for implants and a psycho-cyborg cliffhanger [which] all get crammed into True Skin, a dazzling sci-fi short that reportedly has Hollywood going nuts” [3]. The story features a narrator called “Kaye,” who is immersed in a vivid transhumanist dystopia with his digital skin and other bodily augmentations (see Fig. 1). The idea tantalizes and terrifies us, but it reveals our desire to explore the paradigm of skin-based computing.

Using human skin as a computer interface in the real world is a new frontier. Recently, advances have been made toward an artificial digital skin, sometimes called “electronic skin” or “smart skin.” Right now, it is a heavily sensationalized breakthrough, heralded as a future device that could “revolutionize patient monitoring” in healthcare situations [4]. Resembling a child’s temporary tattoo, electronic skin can act as a sensor that monitors the body for muscle activity and vital signs, including brainwaves and heart activity. However, the same articles that discuss it as a medical breakthrough often tout it as a future technology for our everyday lives, e.g., “Skinlike Electronic Patch Takes Pulse, Promises New Human-Machine Integration” [5]. The technology is framed as a mesmerizing future controller, implying that we will use these patches to interact directly with our computing devices, or even with surrounding objects if the Internet of Things comes to pass. We might be able to communicate with others in ways never imagined. We are told that these patches will encourage a physical freedom, with no keyboards, wires, mice, or glasses to obstruct interaction. We could hide them completely under clothes, providing a surreptitious sense of privacy not afforded by other wearable tech.

And don’t we have a right to hide our computers? Yet many unanswered questions surround these devices. Will they set up a paradigm that allows people to be surveilled by others on very personal terms? Who will be allowed to monitor our skin and read the internal workings of the body? Will they block unwanted interactions? What are the ramifications of this technological leap-of-faith beyond the healthcare context?

These questions need to be addressed at the design stage and not at the moment of release.

Society clings to the belief that wearable tech is imminent, creating an aura that impedes our control of that emergence. I argue that technology can sometimes spawn dehumanizing effects even while it strives for the opposite. As mainstream media celebrate Google Glass and so many other new wearable devices, we need to take a much closer look at how these technologies frame our culture, our society, and us. Proposed future technologies march on a path to emergence amid a zeitgeist that implies faith in them, but we each need to ask: do I really want to be Iron Man?

Author

Isabel Pedersen is with the University of Ontario Institute of Technology, Oshawa, Ontario, Canada. Email: Isabel.Pedersen@uoit.ca.