The issue of air pollution is a “wicked problem” — complicated by incomplete knowledge, both within the scientific community and among various stakeholders.
One of the major ways in which the development of self-driving cars has been discussed — the levels of automation drawn up by the Society of Automotive Engineers (SAE) — is misleading. A typology originally developed to provide some engineering clarity now benefits technology developers far more than it serves the public interest.
With techno-feudalism, what is paid and permitted in a digital space is decided by asymmetric power, not mutual consent. Political approval for funding priorities, education programs and regulation all favor Big Tech.
Will We Make Our Numbers? The year 2020 has a majority of the planet asking a simple question: “How do we stay alive?” Competition is not working for the long-term sustainability of human and environmental well-being.
As we work to decouple carbon emissions and economic growth on the path to net zero emissions — so-called “clean growth” — we must also meaningfully deliver sustainable, inclusive growth with emerging technologies.
With more than 50% of the global population living in non-democratic states, and keeping in mind the disturbing trend toward authoritarianism among populist leaders in supposedly democratic countries, it is easy to imagine dystopian scenarios about the destructive potential of digitalization and AI for the future of freedom, privacy, and human rights. But AI and digital innovations could also be enablers of a Renewed Humanism in the Digital Age.
While many of us hear about the latest and greatest breakthroughs in AI technology, we hear far less about their environmental impact. In fact, much of AI’s recent progress has required ever-increasing amounts of data and computing power. We believe that tracking and communicating the environmental impact of machine learning (ML) should be a key part of the research and development process.
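To make the idea concrete, the footprint of a training run can be roughly estimated from power draw, runtime, data-center overhead, and grid carbon intensity. The sketch below is a back-of-the-envelope illustration; the function name, default values, and example figures are illustrative assumptions, not measurements of any specific system.

```python
# Hedged sketch: a rough carbon-footprint estimate for an ML training run.
# All constants are illustrative assumptions, not measured values.

def training_co2e_kg(gpu_power_kw: float,
                     num_gpus: int,
                     hours: float,
                     pue: float = 1.5,
                     grid_intensity_kg_per_kwh: float = 0.4) -> float:
    """Estimate kilograms of CO2-equivalent emitted by a training run.

    gpu_power_kw: average draw per accelerator, in kilowatts
    pue: data-center Power Usage Effectiveness (overhead multiplier)
    grid_intensity_kg_per_kwh: carbon intensity of the local grid
    """
    # Total electricity consumed, including data-center overhead.
    energy_kwh = gpu_power_kw * num_gpus * hours * pue
    # Convert energy to emissions via the grid's carbon intensity.
    return energy_kwh * grid_intensity_kg_per_kwh

# Example: 8 GPUs drawing 0.3 kW each for 100 hours.
print(round(training_co2e_kg(0.3, 8, 100), 1))  # kg CO2e
```

Even this crude arithmetic makes the trade-off visible: doubling model training time or GPU count doubles the estimated emissions, which is exactly the kind of number worth reporting alongside accuracy results.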
Disruptions can have positive as well as negative impacts on natural and human systems. Among the most fundamental disruptions to global society over the last century is the rise of big data, artificial intelligence (AI), and other digital technologies. These digital technologies have created new opportunities to understand and manage global systemic risks.
Some collective behavior that supports sustainability entails individual inconvenience: many small acts of environmental kindness require thought, effort, or consideration.
Security threats to smart devices come not only from hacking but also from a lack of control over data access. Because security is kept separate from convenience, it is difficult for the average user to determine how secure a smart device is.
Mega-platforms have, with the addition of one extra ingredient, combined lock-in and loyalty to create a grave, and perhaps unexpected, consequence. The extra ingredient is psychology; and the unexpected consequence is what might be called digital dependence.
Democracy itself is under (yet another) threat from deepfake videos, which could be used to create compromising material of politicians. For example, a digitally altered video of Speaker of the U.S. House of Representatives Nancy Pelosi appearing to slur drunkenly was viewed millions of times and tweeted by the U.S. President; although the video is demonstrably a hoax, the tweet remains undeleted.
Contemporary and emerging digital technologies are leading us to question the ways in which humans interact with machines and with complex socio-technical systems. The new dynamics of technology and human interaction will inevitably exert pressure on existing ethical frameworks and regulatory bodies.
As technology pervades all aspects of our existence, and Artificial Intelligence and machine learning systems become commonplace, a new era of human-computer interaction is emerging that will involve directing our focus beyond traditional approaches, to span other intricate interactions with computer-based systems.
It is important to define autonomy in technology, which is not the same as automation. Automated systems operate by clear repeatable rules based on unambiguous sensed data. Autonomous systems take in data about the unstructured world around them, process that data to generate information, and generate alternatives and make decisions in the face of uncertainty.
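The distinction above can be sketched in code: an automated system applies a fixed, repeatable rule to unambiguous data, while an autonomous system processes noisy data into information, generates alternatives, and decides under uncertainty. The thermostat rule, route names, and sample travel times below are invented purely for illustration.

```python
# Automated: a clear, repeatable rule over unambiguous sensed data.
# Same input always yields the same output.
def automated_thermostat(temp_c: float) -> str:
    return "heat_on" if temp_c < 20.0 else "heat_off"

# Autonomous (simplified): noisy data -> information -> alternatives
# -> a decision in the face of uncertainty. Here the system averages
# noisy travel-time samples per route, then commits to the route with
# the best expected time even though no sample is definitive.
def autonomous_route_choice(noisy_travel_times: dict) -> str:
    expected = {route: sum(samples) / len(samples)
                for route, samples in noisy_travel_times.items()}
    return min(expected, key=expected.get)

print(automated_thermostat(18.0))  # fixed rule fires deterministically
print(autonomous_route_choice({
    "highway": [30, 55, 40],   # fast on average, but high variance
    "surface": [42, 44, 43],   # slower on average, but predictable
}))
```

The point of the contrast is not the ten lines of code but the shape of the decision: the thermostat never weighs alternatives, whereas the route chooser must commit to one option while its inputs remain uncertain.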
Technology for Big Data, and its brother-in-arms Machine Learning, is at the root of, and the facilitator of, deliberate string-pulling design choices. These design choices are made by people, and so the question actually becomes: do the design choices enabled by Big Data and Machine Learning have the capacity to alter, diminish, and perhaps actually “destroy” what it means to be fundamentally human?
Why would anyone own, or even need to own, a driverless car if they do not get to drive it? This in turn raises the question: if the central tenet of the personal car ownership model (i.e., ownership) no longer holds, what is the replacement business model?
It is essential not only to estimate the sales potential of driverless cars, but also to debate how they will affect society and the livability of cities.
Meeting travelers’ expectations and properly exploiting available transport resources is becoming an increasingly complex task.
The role of driverless cars in future transport systems remains debatable: will they replace other transport modes, or serve a novel, unique, and complementary function?