AI Tipping Point

March 9th, 2018, in Articles, Ethics, Human Impacts, Social Implications of Technology

Prior to 2016, artificial intelligence received little press coverage, with only occasional hype. Somewhere in the last two years we reached an AI tipping point. Perhaps triggered by Max Tegmark’s conferences and his recent book Life 3.0: Being Human in the Age of Artificial Intelligence (2017), we are hearing much more about AI. I pick on Max because he seems to have initiated the open letter of concern about AI research and, more recently, the Asilomar Principles for AI research.

IEEE has gotten into the act as well with our “IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.” Appropriately, the Society on Social Implications of Technology (SSIT) is hosting some of the standards committees emerging from this effort.

The number of AI “points on the radar” has expanded dramatically. We need to pay attention. At the AI conferences Tegmark has convened, the majority of attendees felt we would see the emergence of Artificial General Intelligence (AGI) by 2055 (2016 conference); by the 2017 conference that estimate had moved to 2047. AGI would be comparable to human intelligence in terms of flexibility, learning, and competencies spanning many fields. This is different from excelling at a single objective such as playing “Go” or treating cancer.

The “single objective” systems are significant in their own right. Playing chess may not have global impact, but responding in super-human time in accident prevention, stock trading, or military situations can have very real impact, even if the intelligence is not generalized. At some point, even simple AI systems might trigger tipping-point effects of their own.

Tegmark’s book challenges major myths about AI, but also asserts that this is the most important issue of our time. His “Future of Life Institute” identifies nuclear war, biotech, and climate change as existential risks we must address, but puts AI first, in part because the projected AGI dates above precede the expected dates of high-level climate change impacts.

Nick Bostrom (University of Oxford) asserts that superintelligence may emerge by 2033, a risk he has been projecting since 1997. Nick is founder of the Future of Humanity Institute, which parallels the Future of Life Institute in seeking to address the challenges we face, but looks a bit further out than the Future of Life folks.

Many other articles, books, and no doubt engineering results have emerged in the last year, and more are headed our way. We have likely passed a key AI tipping point, and will see the results propagate exponentially. Just ask Siri or Alexa what she thinks.