Two major forces are shaping the future of human civilization: anthropogenic climate change and the digital revolution. The changing climate is driving systemic shifts that threaten to destabilize the health and wellbeing of humankind and of the natural systems on which it depends.
It is important to discuss both the potential and risks of machine learning (ML) and to inspire practitioners to use ML for beneficial objectives.
Contemporary and emerging digital technologies are leading us to question the ways in which humans interact with machines and with complex socio-technical systems. The new dynamics of technology and human interaction will inevitably exert pressure on existing ethical frameworks and regulatory bodies.
As technology pervades all aspects of our existence, and Artificial Intelligence and machine learning systems become commonplace, a new era of human-computer interaction is emerging that will involve directing our focus beyond traditional approaches, to span other intricate interactions with computer-based systems.
“Nudging” is the term used in the IEEE standards work on Ethics for AI Design. An AI system that applies deep learning to manipulate human decisions, informed by detailed analysis of the targeted individual, is a disturbing prospect that must affect our trust in both the systems and those who direct their applications.
Discrimination is “embedded in computer code and, increasingly, in artificial intelligence technologies that we are reliant on, by choice or not.”
What sense of worth and dignity can a person have when their daily activities are confined within systemic contraptions where personal input, originality, and initiative are either undesirable, or quantified as targets to be maximized?
Will AI be our biggest ever advance — or the biggest threat? The real danger of AI lies not in sudden apocalypse, but in the gradual degradation and disappearance of what makes human experience and existence meaningful.
How do we ensure that tools such as machine learning do not displace important social values? Evaluating the appropriateness of an algorithm requires understanding the domain space in which it will operate.
How does your culture view the potential for AI?
We are asking for AI rationales that can be used to improve operations or to attribute liability. This effort is doomed to failure, and may lead to greater problems.
One result of increased AI integration will be increased empathy for robots. This transformation has potential upsides and risks.
“Why would a Russian oil company want to target information on American voters?” Chris asks in the article. Cambridge Analytica claims to have 4,000 to 5,000 data points on 230 million U.S. adults.
Skilling-up for an AI-powered world involves more than science, technology, engineering and math. As computers behave more like humans, the social sciences and humanities will become even more important. Languages, art, history, economics, ethics, philosophy, psychology and human development courses can teach critical, philosophical and ethics-based skills that will be instrumental in the development and management of AI solutions.
Prior to 2016 there was little press coverage of artificial intelligence, with only occasional hype. Somewhere in the last two years we…
Is it unreasonable for us to want more from the AI-inspired — something more than, for example, a robot that can get up off the ground, and recover from being hit with a club?
I have an expectation that machine consciousness will emerge unexpected, unsought, and perhaps undetected.
The evolution of artificial intelligence (AI) on the web is driven, in part, by the evolution of the web itself. Daniel Dennett, in his recent…
A Guest Blog Post from Victoria A. Hailey, CMC, and Katherine Bennett (standards development leaders in IEEE). On 28 September 2017,…
Periodically, often after an unconscionable massacre such as Las Vegas or Orlando, the United States reviews the balance between the…