Having a philosophical road map of what is required might help those with the skills to design intelligent machines that will enable, and indeed promote, human flourishing.
One can see the emergence of ever more efficient forms of intelligence as networked, self-similar patterns embedded in the universe at its core, driven as they are by the sustained maximization of entropy as a causal force. As a maximizer of future freedom of action, the very existence of gravity can be viewed as an embedded, purposeful, goal-directed form of intelligence.
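The "maximization of future freedom of action" framing echoes the causal entropic force proposal of Wissner-Gross and Freer (Physical Review Letters, 2013); as a hedged gloss on that idea (our illustration, not the author's own formalism):

```latex
% Causal entropic force: a system at macrostate X_0 is pushed up the
% gradient of the entropy S_c of the future paths available to it over
% a time horizon \tau, with a "causal temperature" T_c setting the
% strength of the drive toward states with the most open futures.
F(X_0, \tau) = T_c \, \nabla_X S_c(X, \tau) \big|_{X = X_0}
```

On this reading, a system that keeps more futures open behaves as if it were intelligent, which is the sense in which entropy maximization can be cast as a causal, goal-directed force.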
This special issue, published in cooperation with IEEE Transactions on Technology and Society (December 2021), is dedicated to examining the governance of artificial intelligence (AI) through soft law: programs characterized by the creation of substantive expectations that are not directly enforced by government.
AI4Eq — Mark your calendars for Wednesday, Oct 27, from 9 AM to 5 PM (EDT)
One element of the expansion of digital technologies is the shift of Artificial Intelligence (AI) technology from research laboratories into the hands of anyone with a smartphone. AI-powered search, personalization, and automation are being deployed across sectors, from education to healthcare to policing to finance. This wide diffusion of AI is reshaping how organizations, communities, and individuals function. The potentially radical consequences of AI have pushed nation states across the globe to publish strategies on how they seek to shape, drive, and leverage the disruptive capabilities of AI technologies to bolster their prosperity and security.
Digital discrimination is becoming a serious problem, as more and more decisions are delegated to systems increasingly based on artificial intelligence techniques such as machine learning. Although a significant amount of research has been undertaken from different disciplinary angles to understand this challenge—from computer science to law to sociology—none of these fields has been able to resolve the problem on its own terms. We propose a synergistic approach that allows us to explore bias and discrimination in AI by supplementing technical literature with social, legal, and ethical perspectives.
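One way to see where the technical and normative perspectives meet is through a simple audit statistic. The sketch below (our illustration, not taken from the article; the decisions and group labels are hypothetical) computes the demographic parity difference, the gap in favorable-decision rates between two groups:

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Gap in favorable-decision rates between group 0 and group 1.

    y_pred: binary decisions from a model (1 = favorable outcome)
    group:  protected-attribute membership for each individual
    A value near 0 suggests similar treatment; a large gap flags
    potential discrimination worth legal and ethical scrutiny.
    """
    rate_g0 = y_pred[group == 0].mean()
    rate_g1 = y_pred[group == 1].mean()
    return float(rate_g0 - rate_g1)

# Hypothetical lending decisions: group 0 is approved 80% of the time,
# group 1 only 20% of the time, a 0.6 gap that would warrant scrutiny.
decisions = np.array([1, 1, 1, 0, 1, 0, 0, 1, 0, 0])
groups = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(demographic_parity_difference(decisions, groups))  # 0.6
```

Demographic parity is only one of several fairness criteria, some of them mutually incompatible; deciding which gap matters, and what to do about it, is precisely where the legal and sociological perspectives come in.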
Just as the “autonomous” in lethal autonomous weapons allows the military to dissemble over responsibility for their effects, there are civilian companies leveraging “AI” to exert control without responsibility.
And so we arrive at “trustworthy AI” because, of course, we are building systems that people should trust, and if they don’t, it’s their fault, so how can we make them do that, right? Or, we’ve built this amazing “AI” system that can drive your car for you, but don’t blame us when it crashes, because you should have been paying attention. Or, we built it, sure, but then it learned stuff and it’s not under our control anymore—the world is a complex place.
The COVID-19 pandemic has exposed and exacerbated existing global inequalities. Whether at the local, national, or international scale, the gap between the privileged and the vulnerable is growing wider, resulting in a broad increase in inequality across all dimensions of society. The disease has strained health systems, social support programs, and the economy as a whole, drawing an ever-widening distinction between those with access to treatment, services, and job opportunities and those without.
We celebrated AI for mental health equity when it augmented access for marginalized populations. We applauded AI as a complement to current services: practitioners would be less overtaxed and more productive, thereby serving vulnerable populations better.
There is huge potential for artificial intelligence (AI) to bring massive benefits to under-served populations, advancing equal access to public services such as health, education, social assistance, and public transportation. But AI can also drive inequality, concentrating wealth, resources, and decision-making power in the hands of a few countries, companies, or citizens. Artificial intelligence for equity (AI4Eq) calls upon academics, AI developers, civil society, and government policy-makers to work collaboratively toward a technological transformation that increases the benefits to society, reduces inequality, and aims to leave no one behind.
Examining how face recognition software is used to identify and sort citizenship within mechanisms like the Biometric Air Exit (BAE) is immensely important; alongside this, the process by which “citizen” and “noncitizen” are defined, as data points within larger mechanisms like the BAE, needs to be made transparent.
Technological determinism is a myth; there are always underlying economic motivations for the emergence of new technologies. The idea that technology leads development is not necessarily true: consider AI. It has been a topic of interest to researchers for decades, but only recently has the funding caught up, matching the motivation and enabling the development of AI-oriented technologies to really take off.
As we work to decouple carbon emissions and economic growth on the path to net zero emissions — so-called “clean growth” — we must also meaningfully deliver sustainable, inclusive growth with emerging technologies.
With more than 50% of the global population living in non-democratic states, and keeping in mind the disturbing trend toward authoritarianism among populist leaders in supposedly democratic countries, it is easy to imagine dystopian scenarios about the destructive potential of digitalization and AI for the future of freedom, privacy, and human rights. But AI and digital innovations could also be enablers of a Renewed Humanism in the Digital Age.
While many of us hear about the latest and greatest breakthroughs in AI technology, we hear far less about their environmental impact. In fact, much of AI’s recent progress has required ever-increasing amounts of data and computing power. We believe that tracking and communicating the environmental impact of machine learning (ML) should be a key part of the research and development process.
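As a minimal sketch of what such tracking could look like in practice, assuming the open-source codecarbon package (the workload below is a placeholder standing in for a real training job, not anything from the article):

```python
# Emissions-tracking sketch using codecarbon (pip install codecarbon).
from codecarbon import EmissionsTracker

def training_run() -> None:
    # Placeholder compute loop standing in for model training.
    total = 0
    for i in range(10_000_000):
        total += i * i

tracker = EmissionsTracker(project_name="ml-footprint-demo")
tracker.start()
try:
    training_run()
finally:
    emissions_kg = tracker.stop()  # estimated kilograms of CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2-eq")
```

Reporting such estimates alongside accuracy numbers would make the energy cost of a result as visible as its benchmark score.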
Disruptions can have positive as well as negative impacts on natural and human systems. Among the most fundamental disruptions to global society over the last century is the rise of big data, artificial intelligence (AI), and other digital technologies. These digital technologies have created new opportunities to understand and manage global systemic risks.
Two major forces are shaping the future of human civilization: anthropogenic climate change and the digital revolution. The changing climate is driving systemic shifts that threaten to destabilize the health and wellbeing of humankind and the natural systems on which it depends.
It is important to discuss both the potential and risks of machine learning (ML) and to inspire practitioners to use ML for beneficial objectives.
Contemporary and emerging digital technologies are leading us to question the ways in which humans interact with machines and with complex socio-technical systems. The new dynamics of technology and human interaction will inevitably exert pressure on existing ethical frameworks and regulatory bodies.
As technology pervades all aspects of our existence, and Artificial Intelligence and machine learning systems become commonplace, a new era of human-computer interaction is emerging, one that will direct our focus beyond traditional approaches toward more intricate interactions with computer-based systems.