Access Volume 3, Issue 1, 2022 – Special Issue on Biometrics and AI Bias

SSIT members have a history of getting into “good trouble” as they encourage IEEE toward more humanistic stances on ethics, transparency, sustainability, and global equity.
This special issue, published in cooperation with IEEE Transactions on Technology and Society (December 2021), is dedicated to examining the governance of artificial intelligence (AI) through soft law. Soft-law programs are characterized by the creation of substantive expectations that are not directly enforced by government.
IEEE 2089™-2021, Standard for an Age-Appropriate Digital Services Framework Based on the 5Rights Principles for Children, is the first in a family of standards that establishes a set of processes to help organizations make their services age-appropriate.
If it were possible to formulate laws involving vague predicates and adjudicate them, what would be the implications of such minimalist formulations for soft laws and even for “hard” laws? Three questions follow: 1) does possibility imply desirability? 2) does possibility imply infallibility? and 3) does possibility imply accountability? The answer advanced here, to all three, is “no.”
The promise of the Fourth Industrial Revolution (4IR) is overblown and its perils are underappreciated. There are compelling reasons to reject, and even actively oppose, the 4IR narrative.
Access Volume 2, Issue 4, 2021 – Special Issue on Soft Law Governance of Artificial Intelligence
Although much research has been devoted to the effects of autonomous vehicles (AVs) on urban areas, little work has been dedicated to the potential impacts of AVs in rural areas, especially related to feasibility and accessibility [1]. How will automated vehicles impact rural communities?
Lethal autonomous weapon systems have the potential to radically transform warfare. Can open source technology help regulate their development?
The Web has fallen into an unfair culture in which big tech companies offer free applications in exchange for the right to sell our user-generated content.
The technologies under investigation may hold a promising future for the elderly, allowing people to remain in their own homes as they age.
AI4Eq — Mark your calendars for Wednesday, Oct 27 from 9 AM – 5 PM (EDT)
Second International Workshop on Artificial Intelligence for Equity (AI4Eq) Against Modern Indentured Servitude
An element of the expansion of digital technologies is the shift of Artificial Intelligence (AI) technology from research laboratories into the hands of anyone with a smartphone. AI-powered search, personalization, and automation are being deployed across sectors, from education to healthcare to policing to finance. This wide diffusion of AI is reshaping the way organizations, communities, and individuals function. The potentially radical consequences of AI have pushed nation states across the globe to publish strategies on how they seek to shape, drive, and leverage the disruptive capabilities offered by AI technologies to bolster their prosperity and security.
Systems can be designed using methodologies such as value-sensitive design, and then operationalized, to produce socio-technical solutions that support or complement policies addressing environmental sustainability, social justice, or public health. Such systems are deployed to promote the public interest, or to enable users to act in the public interest (individually and at scale), toward individual and communal empowerment.
The fiercest public health crisis in a century has elicited cooperative courage and sacrifice across the globe. At the same time, the COVID-19 pandemic is producing severe social, economic, political, and ethical divides, within and between nations. It is reshaping how we engage with each other and how we see the world around us. It urges us to think more deeply on many challenging issues—some of which can perhaps offer opportunities if we handle them well. The transcripts that follow speak to the potency and promise of dialogue. They record two in a continuing series of “COVID-19 In Conversations” hosted by Oxford Prospects and Global Development Institute.
Digital discrimination is becoming a serious problem, as more and more decisions are delegated to systems increasingly based on artificial intelligence techniques such as machine learning. Although a significant amount of research has been undertaken from different disciplinary angles to understand this challenge, from computer science to law to sociology, none of these fields has been able to resolve the problem on its own terms. We propose a synergistic approach that allows us to explore bias and discrimination in AI by supplementing technical literature with social, legal, and ethical perspectives.
Just as the “autonomous” in lethal autonomous weapons allows the military to dissemble over responsibility for their effects, there are civilian companies leveraging “AI” to exert control without responsibility.
And so we arrive at “trustworthy AI” because, of course, we are building systems that people should trust, and if they don’t, it’s their fault, so how can we make them do that, right? Or: we’ve built this amazing “AI” system that can drive your car for you, but don’t blame us when it crashes, because you should have been paying attention. Or: we built it, sure, but then it learned stuff and it’s not under our control anymore; the world is a complex place.
The COVID-19 pandemic has exposed and exacerbated existing global inequalities. Whether at the local, national, or international scale, the gap between the privileged and the vulnerable is growing wider, resulting in a broad increase in inequality across all dimensions of society. The disease has strained health systems, social support programs, and the economy as a whole, drawing an ever-widening distinction between those with access to treatment, services, and job opportunities and those without.
We celebrated AI for mental health equity when it augmented access for marginalized populations. We applauded AI as a complement to current services; practitioners would be less overtaxed and more productive, thereby better serving vulnerable populations.