The call for responsible innovation is a call to address and account for technology’s short- and long-term impacts within social, political, environmental, and cultural domains. Technological stewardship stands as a commitment to anticipate and mitigate technology’s potential for disruption, and especially harm, and to guide innovation toward beneficial ends. Dialogue and collaboration across diverse perspectives are essential for developing actionable technological solutions that attend in responsible ways to the evolving needs of society.
All the deep philosophical questions, the joke begins, were asked by the classical Greeks; everything since then, goes the punchline, has been footnotes and comments in the margins.
If caregiving is the very essence of being human, why would we consider turning it over to robots? Technology, and artificial intelligence (AI) in particular, has created a world in which automation is prioritized and digital is seen as an improvement on analog: more accurate, more portable, and more controllable. Caregiving is as analog as it gets, and it is a field with a serious labor shortage. That makes it ripe for automation, and in fact, the robot caregivers are already here.
Social robotics is poised to impact society by addressing isolation and providing companionship, supplying a form of interaction when no human interaction is available.
Worldwide, 55 million individuals are living with dementia, a number projected to increase to 139 million by 2050. Technological devices and solutions that can benefit the dementia community also carry ethical implications, such as privacy and issues of consent. AI-driven location-based services (LBS) may exacerbate the marginalization of individuals living with dementia.
In the first six months of 2018, eight New York City yellow cab drivers, impacted by big tech’s disruption of the taxi industry, took their own lives. “I am not a Slave and I refuse to be one,” wrote one in his suicide note.
Having a philosophical road map of what is required might help those with the skills to design intelligent machines that enable, and indeed promote, human flourishing.
The term “modern indentured servitude” did not originate with this workshop, but we hope that this special issue has highlighted many of the different shapes and processes it can take, some more insidious than others. We would like to think that, if these papers could talk, they would stand up one after the other and say, “No, I’m Spartacus.” In these dark times, each of us needs the courage to be Spartacus.
It would be good if, whenever a client connected to an HTTP server, or indeed any app connected with a central server, the server responded with a corresponding acknowledgment of data, along the lines of: “Before we begin our session this morning, I would like to acknowledge the traditional owner of the data which is being transferred, and respect rights to privacy, identity, location, attention, and personhood.”
One can see the emergence of ever more efficient forms of intelligence as networked, self-similar patterns embedded in the universe at its core, driven as they are by the sustained maximization of entropy as a causal force. As a maximizer of future freedom of action, the very existence of gravity can be viewed as a form of embedded, purposeful, goal-directed intelligence.
VIRTUAL CONFERENCE – Hong Kong, November 10-12, 2022 Conference Theme: “Digital and Societal Transformations” – Conference website: https://www.istas22.org
Emerging social contexts add new requirements to the knowledge that successful roboticists need. Much of this additional knowledge comes from the social sciences and humanities.
The Second International Workshop on Artificial Intelligence for Equity (AI4Eq), “Against Modern Indentured Servitude,” was organised in association with IEEE…
Access Volume 3, Issue 1 (2022) – Special Issue on Biometrics and AI Bias…
SSIT members have a history of getting into “good trouble” as they encourage IEEE toward more humanistic stances on ethics, transparency, sustainability, and global equity.
This special issue, published in cooperation with IEEE Transactions on Technology and Society (December 2021), is dedicated to examining the governance of artificial intelligence (AI) through soft law. Soft-law programs are characterized by the creation of substantive expectations that are not directly enforced by government.
IEEE 2089™-2021, Standard for an Age-Appropriate Digital Services Framework Based on the 5Rights Principles for Children, is the first in a family of standards that establishes a set of processes to help organizations make their services age-appropriate.
If it were possible to formulate laws involving vague predicates and to adjudicate them, what would be the implications of such minimalist formulations for soft laws, and even for “hard” laws? Three questions follow: 1) does possibility imply desirability? 2) does possibility imply infallibility? and 3) does possibility imply accountability? The answer advanced here, to all three, is “no.”
The promise of the Fourth Industrial Revolution (4IR) is overblown, and its perils are underappreciated. There are compelling reasons to reject, and even actively oppose, the 4IR narrative.