How can local (grassroots) contributive justice be used as a driving force for the common good?

The call for responsible innovation is a call to address and account for technology’s short- and long-term impacts within social, political, environmental, and cultural domains. Technological stewardship stands as a commitment to anticipate and mitigate technology’s potential for disruption and especially harm, and to guide innovation toward beneficial ends. Dialogue and collaboration across diverse perspectives are essential for developing actionable technological solutions that attend in responsible ways to the evolving needs of society.
All the deep philosophical questions, starts the joke, were asked by the classical Greeks, and everything since then has been footnotes and comments in the margins, finishes the punchline.
The term “modern indentured servitude” did not originate with this workshop, but we hope that this special issue has highlighted many of the different shapes and processes it can take, some more insidious than others. We would like to think that, if each paper could talk, they would get up one after the other and say, “No, I’m Spartacus.” In these dark times, each of us needs the courage to be Spartacus.
It would be good if whenever a client connected to an http server, or indeed any app connected with a central server, the server responded with a corresponding acknowledgment of data, along the lines of “Before we begin our session this morning, I would like to acknowledge the traditional owner of the data which is being transferred, and respect rights to privacy, identity, location, attention and personhood.”
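The imagined behavior can be sketched, tongue in cheek, as a minimal HTTP server that opens every session with such an acknowledgment. This is an illustrative sketch only; the handler name and the `X-Data-Acknowledgment` header are hypothetical, not anything proposed in the piece itself.

```python
# Playful sketch: an HTTP server that prefixes every response with a
# "data acknowledgment" along the lines the author imagines.
from http.server import BaseHTTPRequestHandler, HTTPServer

ACKNOWLEDGMENT = (
    "Before we begin our session this morning, I would like to acknowledge "
    "the traditional owner of the data which is being transferred, and "
    "respect rights to privacy, identity, location, attention and personhood."
)

class AcknowledgingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = (ACKNOWLEDGMENT + "\n").encode("utf-8")
        self.send_response(200)
        # Hypothetical header carrying the acknowledgment alongside the body.
        self.send_header("X-Data-Acknowledgment", ACKNOWLEDGMENT)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Serve on localhost:8000 until interrupted.
    HTTPServer(("localhost", 8000), AcknowledgingHandler).serve_forever()
```

Any client connecting would receive the acknowledgment before any substantive payload, which is precisely the inversion the author is after: the server, not the user, opens by recognizing whose data is in play.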
When faced with an ethical problem, such as a conflict of interest, codes of ethics and available ethical problem-solving methods may not help us decide upon the moral course of action to take. One method claimed to be helpful in such situations is The New York Times Test.
Emerging social contexts add new requirements to the knowledge that successful roboticists need. Much of this additional knowledge comes from the social sciences and humanities.
By failing to attend to the source, disinformation can be stored along with information, making it difficult to distinguish the good penny from the bad penny.
Functional democratic governance has five fundamental preconditions: civic dignity, confluent values, epistemic diversity, accessible education, and legitimate consent.
This special issue published in cooperation with IEEE Transactions on Technology and Society (December 2021) is dedicated to examining the governance of artificial intelligence (AI) through soft law. These programs are characterized by the creation of substantive expectations that are not directly enforced by government.
If it were possible to formulate laws involving vague predicates and adjudicate them, what would be the implications of such minimalist formulations for soft laws and even for “hard” laws? Three questions frame the possible implications: 1) does possibility imply desirability; 2) does possibility imply infallibility; and 3) does possibility imply accountability? The answer advanced here, to all three questions, is “no.”
The promise of 4IR is overblown and its perils are underappreciated. There are compelling reasons to reject—and even actively oppose—the 4IR narrative.
The Web has entered an unfair culture where big tech companies offer free applications in exchange for the right to sell our user-generated content.
PIT acknowledges that technological potential can be harnessed to satisfy the needs of civil society. In other words, technology can be seen as a public good that can benefit all, through an open democratic system of governance, with open data initiatives, open technologies, and open systems/ecosystems designed for the collective good, as defined by respective communities that will be utilizing them.
Systems can be designed using methodologies like value-sensitive design, and operationalized, to produce socio-technical solutions that support or complement policies addressing environmental sustainability, social justice, or public health. Such systems are then deployed to promote the public interest, or to enable users to act (individually and at scale) toward individual and communal empowerment.
The fiercest public health crisis in a century has elicited cooperative courage and sacrifice across the globe. At the same time, the COVID-19 pandemic is producing severe social, economic, political, and ethical divides, within and between nations. It is reshaping how we engage with each other and how we see the world around us. It urges us to think more deeply on many challenging issues—some of which can perhaps offer opportunities if we handle them well. The transcripts that follow speak to the potency and promise of dialogue. They record two in a continuing series of “COVID-19 In Conversations” hosted by Oxford Prospects and Global Development Institute.
Disease prevention due to successful vaccination is a double-edged sword, as it can give the illusion that mass vaccination is no longer warranted. Antivaccination movements have existed throughout history, but most recently parents have been declining childhood vaccines at alarming levels [2, S9]. Safety concerns and misinformation seem to be at the forefront of these movements.
Just as the “autonomous” in lethal autonomous weapons allows the military to dissemble over responsibility for their effects, there are civilian companies leveraging “AI” to exert control without responsibility.
And so we arrive at “trustworthy AI” because, of course, we are building systems that people should trust and if they don’t it’s their fault, so how can we make them do that, right? Or, we’ve built this amazing “AI” system that can drive your car for you but don’t blame us when it crashes because you should have been paying attention. Or, we built it, sure, but then it learned stuff and it’s not under our control anymore—the world is a complex place.
The COVID-19 pandemic has exposed and exacerbated existing global inequalities. Whether at the local, national, or international scale, the gap between the privileged and the vulnerable is growing wider, resulting in a broad increase in inequality across all dimensions of society. The disease has strained health systems, social support programs, and the economy as a whole, drawing an ever-widening distinction between those with access to treatment, services, and job opportunities and those without.
There is huge potential for artificial intelligence (AI) to bring massive benefits to under-served populations, advancing equal access to public services such as health, education, social assistance, and public transportation. Yet AI can also drive inequality, concentrating wealth, resources, and decision-making power in the hands of a few countries, companies, or citizens. Artificial intelligence for equity (AI4Eq) calls upon academics, AI developers, civil society, and government policy-makers to work collaboratively toward a technological transformation that increases the benefits to society, reduces inequality, and aims to leave no one behind.