Social media have been shown to accelerate the spread of negative content such as disinformation and hate speech, often unleashing a reckless herd mentality within networks, further aggravated by malicious entities using bots for amplification. So far, the response to this emerging global crisis has centered on social media platform companies making reactive moves.
Democracy itself is under (yet another) threat from deepfake videos, which could be used to create compromising material of politicians. For example, the digitally altered video of U.S. House of Representatives Speaker Nancy Pelosi appearing to slur drunkenly was viewed millions of times and tweeted by the U.S. President, and although the video is demonstrably a hoax, the tweet remains undeleted.
Technology for Big Data, and its brother-in-arms Machine Learning, lies at the root of deliberate, string-pulling design choices, and facilitates them. These design choices are made by people, and so the question actually becomes: do the design choices enabled by Big Data and Machine Learning have the capacity to alter, diminish, and perhaps actually “destroy” what it means to be fundamentally human?
Parents have no idea that lurking behind their kids’ screens and phones are a multitude of psychologists, neuroscientists, and social science experts, who use their knowledge of psychological vulnerabilities to devise products that capture kids’ attention for the sake of industry profit.
Holmes’s idea of inventing a cheap, small, fast, reliable blood-testing system to creatively destroy most of the world’s existing infrastructure for blood tests ran into big problems early on. But with her chutzpah, persuasiveness, and eventually the help of outright obfuscations and lies, Holmes kept Theranos going until a Wall Street Journal investigative reporter named John Carreyrou followed a tip from a health-care blogger that something fishy was going on.
Katina Michael, Director of the Center for Engineering, Policy and Society at Arizona State University, speaks at TEDxASU 2019 about…
While “Ubering” was acquiring cachet as a verb and as a routine rite of passage for millennials (the heaviest users of the service), the company was besieged by problems. Some stemmed squarely from a general lack of ethics, or of care for consequences.
What sense of worth and dignity can a person have when their daily activities are confined within systemic contraptions where personal input, originality, and initiative are either undesirable, or quantified as targets to be maximized?
Innovative Information and Communication Technologies play an important role in e-governance and digital democracy. There is unprecedented opportunity for community collective choice, whereby citizens who are affected by a set of governing rules can help to select policy options and rank spending priorities.
Politics requires dialogue, deliberation, negotiation, and compromise. But now the facts themselves are in dispute.
Call for Papers – Special Issue of IEEE Technology and Society Magazine – Human Computer Interaction: Regulation and Ethics of Digital Technology
Will AI be our biggest ever advance — or the biggest threat? The real danger of AI lies not in sudden apocalypse, but in the gradual degradation and disappearance of what makes human experience and existence meaningful.
Given the current lack of regulation, there is nothing in principle to stop unscrupulous organizations from deploying surreptitious robotic olfaction.
As VR has hit the mainstream, much debate has arisen over its ethical complexities. Traditional moral responsibilities do not always translate to the digital world. We argue that one aspect is essential to ethical responsibility for virtual reality: VR developers must integrate ethical analysis into the design process and disseminate best practices.
The level of state surveillance practiced in the supposedly illiberal regimes prior to the fall of the Berlin Wall is now routinely accepted, from the widespread use of CCTV to online tracking and data recording. Therefore, instead of labeling a display of genuine concern as “paranoia,” perhaps a lack of genuine concern should instead be stigmatized as a “disease” or a “disorder”: complacentosis, complyaphilia, complicivitis, ignorrhea.
Dr. Philip Koopman of Carnegie Mellon University received the IEEE SSIT Carl Barus Award for Outstanding Service in the Public Interest on November 13, 2018, in Washington, DC.
This month I will briefly discuss the work of the IEEE Humanitarian Activities Committee, which I have the honor to chair this year.
The time of robotic deception is rapidly approaching. We are being bombarded with warnings about the inherent ethical dangers of the coming robotics and AI revolution, but far less concern has been expressed about the potential for robots to deceive human beings.
If digital technologies can be designed to maintain or sustain values, then the same technologies can be designed to manipulate or undermine those same values.
Developers face a conundrum when launching software that must be equipped to make a moral judgment. Algorithms are being programmed to make consequential decisions that align with laws and moral sensibilities.