There is huge potential for artificial intelligence (AI) to bring massive benefits to under-served populations, advancing equal access to public services such as health, education, social assistance, and public transportation. Yet AI can also drive inequality, concentrating wealth, resources, and decision-making power in the hands of a few countries, companies, or citizens. Artificial intelligence for equity (AI4Eq) calls upon academics, AI developers, civil society, and government policy-makers to work collaboratively toward a technological transformation that increases the benefits to society, reduces inequality, and aims to leave no one behind.
Examining how face recognition software is used to identify and sort citizenship within mechanisms like the Biometric Air Exit (BAE) is immensely important; alongside this, the process by which “citizen” and “noncitizen” are defined, as data points within larger mechanisms like the BAE, needs to be made transparent.
Technological determinism is a myth; there are always underlying economic motivations for the emergence of new technologies. The idea that technology leads development is not necessarily true; consider AI, for example. It has been a topic of interest to researchers for decades, but only recently has the funding caught up, matching the motivation and enabling the development of AI-oriented technologies to really take off.
As we work to decouple carbon emissions and economic growth on the path to net zero emissions — so-called “clean growth” — we must also meaningfully deliver sustainable, inclusive growth with emerging technologies.
With more than 50% of the global population living in non-democratic states, and keeping in mind the disturbing trend to authoritarianism of populist leaders in supposedly democratic countries, it is easy to think of dystopian scenarios about the destructive potentials of digitalization and AI for the future of freedom, privacy, and human rights. But AI and digital innovations could also be enablers of a Renewed Humanism in the Digital Age.
While many of us hear about the latest and greatest breakthrough in AI technology, what we hear less about is its environmental impact. In fact, much of AI’s recent progress has required ever-increasing amounts of data and computing power. We believe that tracking and communicating the environmental impact of ML should be a key part of the research and development process.
Disruptions can have positive as well as negative impacts on natural and human systems. Among the most fundamental disruptions to global society over the last century is the rise of big data, artificial intelligence (AI), and other digital technologies. These digital technologies have created new opportunities to understand and manage global systemic risks.
Two major forces are shaping the future of human civilization: anthropogenic climate change and the digital revolution. The changing climate is driving systemic shifts that threaten to destabilize the health and wellbeing of humankind and the natural systems on which they depend.
It is important to discuss both the potential and risks of machine learning (ML) and to inspire practitioners to use ML for beneficial objectives.
Contemporary and emerging digital technologies are leading us to question the ways in which humans interact with machines and with complex socio-technical systems. The new dynamics of technology and human interaction will inevitably exert pressure on existing ethical frameworks and regulatory bodies.
As technology pervades all aspects of our existence, and Artificial Intelligence and machine learning systems become commonplace, a new era of human-computer interaction is emerging that will involve directing our focus beyond traditional approaches, to span other intricate interactions with computer-based systems.
“Nudging” is the term used in the IEEE standards work on Ethics for AI Design. An AI system that applies deep learning to manipulate human decisions, informed by detailed analysis of the targeted individual, is a disturbing possibility that must affect our trust in both the systems and those who direct their applications.
Discrimination is “embedded in computer code and, increasingly, in artificial intelligence technologies that we are reliant on, by choice or not.”
What sense of worth and dignity can a person have when their daily activities are confined within systemic contraptions where personal input, originality, and initiative are either undesirable, or quantified as targets to be maximized?
Will AI be our biggest ever advance — or the biggest threat? The real danger of AI lies not in sudden apocalypse, but in the gradual degradation and disappearance of what makes human experience and existence meaningful.
How do we ensure that tools such as machine learning do not displace important social values? Evaluating the appropriateness of an algorithm requires understanding the domain space in which it will operate.
How does your culture view the potential for AI?
We are asking AI systems for rationales that can be used to improve operations or to attribute liability. This effort is doomed to failure, and may lead to greater problems.
One result of increased AI integration will be increased empathy for robots. This transformation has potential upsides and risks.
“Why would a Russian oil company want to target information on American voters?” Chris asks in the article. Cambridge Analytica claims to have 4,000–5,000 data points on 230 million U.S. adults.