As technology pervades every aspect of our lives, and artificial intelligence and machine learning systems become commonplace, a new era of human-computer interaction is emerging, one that will direct our focus beyond traditional approaches toward other, more intricate interactions with computer-based systems.
“Nudging” is the term used in the IEEE standards work on ethics for AI design. An AI system that applies deep learning to manipulate human decisions, informed by detailed analysis of the targeted individual, is a disturbing prospect that must affect our trust both in such systems and in those who direct their applications.
What sense of worth and dignity can a person have when their daily activities are confined within systemic contraptions where personal input, originality, and initiative are either undesirable or quantified as targets to be maximized?
Will AI be our biggest advance ever, or our biggest threat? The real danger of AI lies not in a sudden apocalypse, but in the gradual degradation and disappearance of what makes human experience and existence meaningful.
How do we ensure that tools such as machine learning do not displace important social values? Evaluating the appropriateness of an algorithm requires understanding the domain space in which it will operate.
Developers face a conundrum when launching software that must be equipped to make a moral judgment. Algorithms are being programmed to make consequential decisions that align with laws and moral sensibilities.
How does your culture view the potential for AI?
We are asking for AI rationale that can be used to improve operations, or attribute liability. This effort is doomed to failure, and may lead to greater problems.
“Why would a Russian oil company want to target information on American voters?” Chris asks in the article. Cambridge Analytica claims to have 4,000–5,000 data points on each of 230 million U.S. adults.
Skilling-up for an AI-powered world involves more than science, technology, engineering and math. As computers behave more like humans, the social sciences and humanities will become even more important. Languages, art, history, economics, ethics, philosophy, psychology and human development courses can teach critical, philosophical and ethics-based skills that will be instrumental in the development and management of AI solutions.
Prior to 2016, artificial intelligence received little press coverage beyond occasional hype. Somewhere in the last two years we… Read More
I expect that machine consciousness will emerge unexpected, unsought, and perhaps undetected.
Web artificial intelligence (AI) evolution is driven, in part, by the evolution of the web. Daniel Dennett, in his recent… Read More
A Guest Blog Post from: Victoria A. Hailey, CMC & Katherine Bennett, (standards development leaders in IEEE). On 28 September 2017,… Read More
The next generation of socio-technical systems can be seen as a kind of “focal point” for the convergence of a number of current trends in computing, information systems, and information technology. These trends include the technology-driven instrumentation of infrastructure by ubiquitous computing and/or “intelligent” devices, with the prefix “smart” now taking precedence over the prefix “e-,” e.g. SmartGrids, SmartCities, and SmartMotorways, rather than the e-commerce, e-health, and e-learning initiatives commonplace at the turn of the millennium.
What are your values? I’m asking you, the reader. As a quick exercise, try to write down ten values you… Read More
At the IEEE 2016 Conference on Norbert Wiener in the 21st Century, held in Melbourne, Australia, July 13–15, 2016, Keith… Read More
Heather M. Roff and Peter W. Singer strikingly identify a problem for the next American president in the form… Read More
An Interview with Metropolitan Kallistos Ware Born Timothy Ware in Bath, Somerset, England, Metropolitan Kallistos was educated at Westminster School… Read More
Mary Catherine Bateson in an interview from the 2014 Norbert Wiener Conference in Boston. In the early part of… Read More