Digital discrimination is becoming a serious problem as more and more decisions are delegated to systems increasingly based on artificial intelligence techniques such as machine learning. Although a significant amount of research has been undertaken from different disciplinary angles to understand this challenge—from computer science to law to sociology—no single field has been able to resolve the problem on its own terms. We propose a synergistic approach that allows us to explore bias and discrimination in AI by supplementing the technical literature with social, legal, and ethical perspectives.
Disease prevention through successful vaccination is a double-edged sword, as it can create the illusion that mass vaccination is no longer warranted. Antivaccination movements have existed throughout history, but most recently parents have been declining childhood vaccines at alarming rates [2, S9]. Safety concerns and misinformation appear to be at the forefront of these movements.
Unintended consequences of technological development matter in practice and thus are not just of academic interest. SSIT would do well to spark constructive and practical discussion about managing unintended consequences.
Just as the “autonomous” in lethal autonomous weapons allows the military to dissemble over responsibility for their effects, there are civilian companies leveraging “AI” to exert control without responsibility.
And so we arrive at “trustworthy AI” because, of course, we are building systems that people should trust and if they don’t it’s their fault, so how can we make them do that, right? Or, we’ve built this amazing “AI” system that can drive your car for you but don’t blame us when it crashes because you should have been paying attention. Or, we built it, sure, but then it learned stuff and it’s not under our control anymore—the world is a complex place.
ISTAS 2021 will be jointly hosted by the University of Waterloo and the University of Guelph (Ontario, Canada) on October 28–31, 2021. Submission deadline: July 13, 2021.
The COVID-19 pandemic has exposed and exacerbated existing global inequalities. Whether at the local, national, or international scale, the gap between the privileged and the vulnerable is growing wider, resulting in a broad increase in inequality across all dimensions of society. The disease has strained health systems, social support programs, and the economy as a whole, drawing an ever-widening distinction between those with access to treatment, services, and job opportunities and those without.
We celebrated AI for mental health equity when it augmented access for marginalized populations. We applauded AI as a complement to current services: practitioners would be less overtaxed and more productive, thereby serving vulnerable populations better.
The public’s faith in science and technology has never been higher. Computer “apps” that explore things such as the frequency, and point of origin, of COVID-related Google search terms and Twitter posts are being used to trace the progress of the virus and to predict the sites of further outbreaks. The United States has been roiled by the death, at the hands of the police, of George Floyd. Floyd’s killing was captured in a video recording that has circulated throughout the globe and has acquired the near-iconic power of the crucifixion. With the majority of the American people equipped to make audio–visual recordings of police brutality and post them on social media, we expect that crimes such as this will diminish.
Open technology communities are loosely organized, volunteer, online groups focused on the development and distribution of open or free software and hardware. “Hacking Diversity: The Politics of Inclusion in Open Technology Cultures” is a study of the efforts of open technology communities to “hack” the issues around the lack of diversity that pervades not only their volunteer communities, but also their related disciplines at large.
There is huge potential for artificial intelligence (AI) to bring massive benefits to under-served populations, advancing equal access to public services such as health, education, social assistance, and public transportation. However, AI can also drive inequality, concentrating wealth, resources, and decision-making power in the hands of a few countries, companies, or citizens. Artificial intelligence for equity (AI4Eq) calls upon academics, AI developers, civil society, and government policy-makers to work collaboratively toward a technological transformation that increases the benefits to society, reduces inequality, and aims to leave no one behind.
From the 1970s onward, we started to dream of the leisure society in which, thanks to technological progress and the consequent increase in productivity, working hours would be minimized and we would all live in abundance, free to devote our time almost exclusively to personal relationships, contact with nature, the sciences, the arts, playful activities, and so on. Today, this utopia seems more unattainable than it did then. Since the beginning of the 21st century, inequalities have become increasingly accentuated: of the increase in wealth in the United States between 2006 and 2018, adjusted for inflation and population growth, more than 87% went to the richest 10% of the population, and the poorest 50% lost wealth.
Understanding the societal trajectory induced by AI, and anticipating its directions so that we might apply it for achieving equity, is a sociological, ethical, legal, cultural, generational, educational, and political problem.
In 2020, our flagship ISTAS conference and the cosponsored conferences were huge successes, with record attendance, partly because going virtual allowed wider participation.
We can perhaps accept Weil’s starting premise of obligations as fundamental concepts, based on which we can also reasonably accept her assertion that “obligations … all stem, without exception, from the vital needs of the human being.”
Examining how face recognition software is used to identify and sort citizenship within mechanisms like the Biometric Air Exit (BAE) is immensely important; alongside this, the process by which “citizen” and “noncitizen” are defined as data points within larger mechanisms like the BAE needs to be made transparent.
Public Interest Technology (PIT) is defined as “technology practitioners who focus on social justice, the common good, and/or the public…
Damon Krukowski’s Ways of Hearing does for digital sound what Berger’s Ways of Seeing did for the reproduced image. He wants us to question what we hear, as well as what we’re no longer hearing, in the era of digital audio.
Albright’s book focuses on a group of Americans who live a life of digital hyper-connectivity. Mostly under age 50, they include what are called Generation X (born between 1965 and 1979), Millennials (born between 1980 and 1999), and their offspring, some of whom, as we have seen, are still infants.
Contemporary circumstances in the United States, in broader politics, in recent protest movements around police brutality, and in the demographics of engineering education, have prompted us to look for new ways to bring theory on gender, race, and class to audiences who would not normally consider it their usual reading.
In 2021, Terri Bookman will become SSIT Administrator, and Heather Hilton of the IEEE Publications Staff will be Editorial/Production Associate for T&S Magazine.