Robots Don’t Pray

By Eugenio Guglielmelli, June 29, 2017

Historically, the evolution of research in robotics and in artificial intelligence (AI) has always been intertwined, although with varying intensity over the years. The concept of AI was originally developed around the basic idea of an artificial mind, mainly capable of formal, logical analysis of structured knowledge, and designed to be embedded in computational systems. Until the 1980s, a benchmark for an artificial mind was to beat the world's best chess player, without any need to generate and control movements and forces in the real world. The famous Turing test was about the capability of a computer, hidden behind a curtain, to give correct, human-like answers to ordinary questions. And it was only a few years ago that a computer won a popular television quiz show by providing correct answers to a variety of questions interpreted directly from natural spoken language. Actually, Turing himself proposed a variety of visionary scenarios in which the body, not just the mind, was part of the process of developing intelligence.

Along this line, a milestone for both the robotics and AI research domains was the seminal paper by Rodney A. Brooks, "Elephants Don't Play Chess" [1], which argued that perception could be turned into action without the need for abstract, formal mediation and logical reasoning. It was the genesis of reactive robotics: perception and, progressively, embodiment became the central focus of an increasing portion of robotics research, and the basis for the development of machines capable of synthesizing and adapting their behavior in real time to their working environment. In summary, it has taken a while, but the artificial mind and the artificial body are finally being considered of equal importance in the problem of developing human-like intelligent systems.

Of note, in recent decades there has been increasing attention, from heterogeneous communities, to concepts such as the technological singularity (i.e., artificial intelligence surpassing human intelligence), artificial life, transhumanism, and immortality. In principle this is in line with the visions of cybernetic systems originally proposed by Wiener, Turing, and others back in the mid-20th century. What is new in these old speculations, previously confined to science fiction, is that they are now supposed to be systematically pursued by exploiting the expected advances of AI, robotics, and automation technology to build a new type of agent, featuring different levels of integration between biological and artificial components. For instance, the 2045 initiative (www.2045.com) has developed a roadmap for implanting a human brain in an artificial body by that year; the Singularity and Humanity+ initiatives plan to run universities worldwide and to sponsor research projects by groups that join the overall philosophy and share the vision of immortality enabled by technology, a vision that is also being elaborated, in parallel, from a theoretical, transdisciplinary perspective.

To some extent, this looks like a kind of new religion: you need to be a follower in order to be part of an effort apparently supported not just by some groups of naïve researchers, but also by large companies and a variety of sponsors providing significant financial and logistical resources, currently used to build a growing network of partners worldwide. To me, all of this effort seems not only quite weak from a scientific and technological viewpoint, but also very dangerous, because of the negative and misleading perception it can generate in society about the ultimate research aims and ethical background of our community.

I believe it is important to raise awareness within SSIT, in the Robotics and Automation Society (RAS), and in IEEE at large about these initiatives. This is perfectly feasible and timely, as demonstrated by the Association for the Advancement of Artificial Intelligence (AAAI), which has recently promoted a dialogue with the Future of Life Institute, another private initiative that promotes research "to ensure that AI systems are robust and beneficial, doing what humans want them to do" (futureoflife.org). Cooperation between the IEEE Robotics and Automation Society and AAAI to foster the convergence of AI and robotics is ongoing (see [2, p. 106] for a report on recent activities). I am happy to know that IEEE Technology and Society Magazine is already very active and interested in joining IEEE Robotics & Automation Magazine in this effort.

Author

Eugenio Guglielmelli is the Editor-in-Chief of IEEE Robotics & Automation Magazine. Presently he is the Head of the Research Unit of Biomedical Robotics and Biomicrosystems at the Campus Bio-Medico University, Roma, Italy. Email: ram-eic@ieee.org.

Eugenio Guglielmelli