The theme of ISTAS 2024 is the Social Implications of Artificial Intelligence (AI). SSIT invites participation from practitioners in academia, industry, and government who contemplate the impacts of technology on today’s society in the areas of ethics, sustainability, and equity, and who particularly examine social values within the tech industry.
Generative artificial intelligence (AI) is rapidly transforming people’s access to and attitudes toward knowledge. It is an extremely powerful technology, but this transformation presents numerous social, environmental, political, and educational considerations.
Amid a global labor force crisis, we cannot turn a blind eye to technological solutions. However, we must approach them with caution and prudence to avoid exacerbating existing biases.
What role does AI play, and what role could it play, in enabling us to enjoy security in our places and spaces? Perhaps we could design technology-enabled spaces for the purpose of strengthening the community and empowering community action.
In our time, it is not mythologies or idols that stand in the place of God, but a new divinity, an “AI-centric” god that, according to some in the transhumanist movement, promises to enhance the human condition in both longevity and cognition. The rubrics of a divinatory algorithm would be shaped depending on one’s philosophical or religious orientation, or even on all of the wisdom literature merged together.
Professor Katina Michael of Arizona State University in the School for the Future of Innovation in Society discusses ChatGPT and its implications. From The List Show TV.
Having a philosophical road map of what is required might help those with the skills to design intelligent machines that enable, and indeed promote, human flourishing.
One can see the emergence of ever more efficient forms of intelligence as networked, self-similar patterns embedded in the universe at its core, driven as they are by the sustained maximization of entropy as a causal force. As a maximizer of future freedom of action, the very existence of gravity can be viewed as an embedded, purposeful, goal-directed form of intelligence.
This special issue published in cooperation with IEEE Transactions on Technology and Society (December 2021) is dedicated to examining the governance of artificial intelligence (AI) through soft law. These programs are characterized by the creation of substantive expectations that are not directly enforced by government.
AI4Eq — Mark your calendars for Wednesday, Oct 27 from 9 AM – 5 PM (EDT)
An element of the expansion of digital technologies is a shift in artificial intelligence (AI) technology from research laboratories into the hands of anyone with a smartphone. AI-powered search, personalization, and automation are being deployed across sectors, from education to healthcare, to policing, to finance. Wide AI diffusion is thus reshaping the way organizations, communities, and individuals function. The potentially radical consequences of AI have pushed nation states across the globe to publish strategies on how they seek to shape, drive, and leverage the disruptive capabilities offered by AI technologies to bolster their prosperity and security.
Digital discrimination is becoming a serious problem, as more and more decisions are delegated to systems increasingly based on artificial intelligence techniques such as machine learning. Although a significant amount of research has been undertaken from different disciplinary angles to understand this challenge—from computer science to law to sociology— none of these fields have been able to resolve the problem on their own terms. We propose a synergistic approach that allows us to explore bias and discrimination in AI by supplementing technical literature with social, legal, and ethical perspectives.
Just as the “autonomous” in lethal autonomous weapons allows the military to dissemble over responsibility for their effects, there are civilian companies leveraging “AI” to exert control without responsibility.
And so we arrive at “trustworthy AI” because, of course, we are building systems that people should trust and if they don’t it’s their fault, so how can we make them do that, right? Or, we’ve built this amazing “AI” system that can drive your car for you but don’t blame us when it crashes because you should have been paying attention. Or, we built it, sure, but then it learned stuff and it’s not under our control anymore—the world is a complex place.
The COVID-19 pandemic has exposed and exacerbated existing global inequalities. Whether at the local, national, or international scale, the gap between the privileged and the vulnerable is growing wider, resulting in a broad increase in inequality across all dimensions of society. The disease has strained health systems, social support programs, and the economy as a whole, drawing an ever-widening distinction between those with access to treatment, services, and job opportunities and those without.
We celebrated AI for mental health equity when it augmented access for marginalized populations. We applauded AI as a complement to current services: practitioners would be less overtaxed and more productive, thereby serving vulnerable populations better.
There is huge potential for artificial intelligence (AI) to bring massive benefits to under-served populations, advancing equal access to public services such as health, education, social assistance, and public transportation. Yet AI can also drive inequality, concentrating wealth, resources, and decision-making power in the hands of a few countries, companies, or citizens. Artificial intelligence for equity (AI4Eq) calls upon academics, AI developers, civil society, and government policy-makers to work collaboratively toward a technological transformation that increases the benefits to society, reduces inequality, and aims to leave no one behind.
Examining how face recognition software is used to identify and sort citizenship within mechanisms like the Biometric Air Exit (BAE) is immensely important; alongside this, the process by which “citizen” and “noncitizen” are defined as data points within larger mechanisms like the BAE needs to be made transparent.
Technological determinism is a myth; there are always underlying economic motivations for the emergence of new technologies. The idea that technology leads development is not necessarily true; consider AI, for example. It has been a topic of interest to researchers for decades, but only recently has the funding caught up, matching the motivation and enabling the development of AI-oriented technologies to really take off.
As we work to decouple carbon emissions and economic growth on the path to net zero emissions — so-called “clean growth” — we must also meaningfully deliver sustainable, inclusive growth with emerging technologies.
With more than 50% of the global population living in non-democratic states, and keeping in mind the disturbing trend toward authoritarianism among populist leaders in supposedly democratic countries, it is easy to imagine dystopian scenarios about the destructive potential of digitalization and AI for the future of freedom, privacy, and human rights. But AI and digital innovations could also be enablers of a Renewed Humanism in the Digital Age.