Access TTS Volume 4, Issue 2, 2023 – Special Issue on Socio-Technical Ecosystem Considerations: An Emergent Research Agenda for AI…

Access Volume 4, Issue 1, 2023 – Special Issue on Designing Ethical AI Using A Human-Centered Approach: Explainability and Accuracy…
Professor Katina Michael of the School for the Future of Innovation in Society at Arizona State University discusses ChatGPT and its implications. From The List Show TV.
This SSIT Guest Lecture was presented by Prof. Ali Hessami, Vega Systems, UK, at a Chapter Meeting organised by IEEE…
The Special Session “Life Science and its Implications for Society” took place during the IEEE International Symposium on Technology and Society (ISTAS) 2021 on…
The Second International Workshop on Artificial Intelligence for Equity (AI4Eq) “Against Modern Indentured Servitude” was organised in association with IEEE…
If it were possible to formulate laws involving vague predicates and adjudicate them, what would be the implications of such minimalist formulations for soft laws and even for “hard” laws? Three questions follow: 1) does possibility imply desirability? 2) does possibility imply infallibility? and 3) does possibility imply accountability? The answer advanced here, to all three, is “no.”
This SSIT Guest Lecture was presented by Prof. Andrew McStay, Bangor University, UK, at a Joint Chapter Meeting organised by…
Just as the “autonomous” in lethal autonomous weapons allows the military to dissemble over responsibility for their effects, there are civilian companies leveraging “AI” to exert control without responsibility.
And so we arrive at “trustworthy AI” because, of course, we are building systems that people should trust and if they don’t it’s their fault, so how can we make them do that, right? Or, we’ve built this amazing “AI” system that can drive your car for you but don’t blame us when it crashes because you should have been paying attention. Or, we built it, sure, but then it learned stuff and it’s not under our control anymore—the world is a complex place.
From the 1970s onward, we started to dream of the leisure society in which, thanks to technological progress and the consequent increase in productivity, working hours would be minimized and we would all live in abundance. We could devote our time almost exclusively to personal relationships, contact with nature, the sciences, the arts, playful activities, and so on. Today, this utopia seems more unattainable than it did then. Since the turn of the 21st century, inequalities have become increasingly pronounced: of the increase in wealth in the United States between 2006 and 2018, adjusted for inflation and population growth, more than 87% went to the richest 10% of the population, while the poorest 50% lost wealth.
Technological determinism is a myth; there are always underlying economic motivations for the emergence of new technologies. The idea that technology leads development is not necessarily true; consider AI, for example. It has been a topic of interest to researchers for decades, but only recently has the funding caught up, matching the motivation and enabling the development of AI-oriented technologies to really take off.
With more than 50% of the global population living in non-democratic states, and keeping in mind the disturbing trend toward authoritarianism among populist leaders in supposedly democratic countries, it is easy to think of dystopian scenarios about the destructive potential of digitalization and AI for the future of freedom, privacy, and human rights. But AI and digital innovations could also be enablers of a Renewed Humanism in the Digital Age.
While many of us hear about the latest and greatest breakthrough in AI technology, what we hear less about is its environmental impact. In fact, much of AI’s recent progress has required ever-increasing amounts of data and computing power. We believe that tracking and communicating the environmental impact of ML should be a key part of the research and development process.
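As a concrete illustration of what such tracking might look like in practice, the minimal sketch below uses the open-source codecarbon package to estimate the energy use and CO2 emissions of a training run. The tiny gradient-descent loop and the project name are illustrative placeholders, not code from any of the articles referenced here.

```python
# A minimal sketch of tracking the carbon footprint of an ML training run.
# Assumes the open-source `codecarbon` package (pip install codecarbon);
# the toy model and data below are placeholder assumptions.
import numpy as np
from codecarbon import EmissionsTracker

def train_placeholder_model(steps: int = 1000) -> float:
    """Stand-in for a real training loop: fits y = 2x by gradient descent."""
    rng = np.random.default_rng(0)
    x = rng.normal(size=10_000)
    y = 2.0 * x
    w, lr = 0.0, 0.01
    for _ in range(steps):
        grad = np.mean(2.0 * (w * x - y) * x)  # d/dw of mean squared error
        w -= lr * grad
    return w

tracker = EmissionsTracker(project_name="placeholder-training-run")
tracker.start()
try:
    w = train_placeholder_model()
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent

print(f"learned weight: {w:.3f}")
print(f"estimated emissions: {emissions_kg:.6f} kg CO2eq")
```

Logging a figure like this alongside accuracy metrics is one way to make environmental cost a routine, comparable part of reporting ML experiments.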
It is important to discuss both the potential and risks of machine learning (ML) and to inspire practitioners to use ML for beneficial objectives.
Contemporary and emerging digital technologies are leading us to question the ways in which humans interact with machines and with complex socio-technical systems. The new dynamics of technology and human interaction will inevitably exert pressure on existing ethical frameworks and regulatory bodies.
As technology pervades all aspects of our existence, and artificial intelligence and machine learning systems become commonplace, a new era of human-computer interaction is emerging, one that directs our focus beyond traditional approaches to the more intricate interactions we now have with computer-based systems.
“Nudging” is the term used in the IEEE standards work on Ethics for AI Design. An AI system that applies deep learning to manipulate human decisions, informed by detailed analysis of the targeted individual, is a disturbing prospect that must affect our trust in both the systems and those who direct their applications.
What sense of worth and dignity can a person have when their daily activities are confined within systemic contraptions where personal input, originality, and initiative are either undesirable or quantified as targets to be maximized?
Will AI be our biggest ever advance — or the biggest threat? The real danger of AI lies not in sudden apocalypse, but in the gradual degradation and disappearance of what makes human experience and existence meaningful.
How do we ensure that tools such as machine learning do not displace important social values? Evaluating the appropriateness of an algorithm requires understanding the domain space in which it will operate.