The nuclear anxiety of the Cold War now seems quaint. While speculative writers of the late 20th and early 21st centuries have largely relegated nukes to the past, the situation at San Onofre reminds us of our sins — of assuming the future would take care of the future. The U.S. Nuclear Regulatory Commission enabled this consensual hallucination. Did it take climate change into consideration?
If you are an undergraduate student interested in examining the social implications of technology, submit your work for a chance to be published and to win cash prizes!
Reflective thinking allows humans to examine the past with intentionality, learn from what happened, and adapt accordingly. We explore thoughts, feelings, and actions, draw out insights, and enhance awareness.
Register Today for ISTAS 2021!
Morris’s book is difficult to read, not only because it is written in reverse chronological order, but because he does not understand the technology he is writing about.
The fiercest public health crisis in a century has elicited cooperative courage and sacrifice across the globe. At the same time, the COVID-19 pandemic is producing severe social, economic, political, and ethical divides, within and between nations. It is reshaping how we engage with each other and how we see the world around us. It urges us to think more deeply about many challenging issues—some of which can perhaps offer opportunities if we handle them well. The transcripts that follow speak to the potency and promise of dialogue. They record two in a continuing series of “COVID-19 In Conversations” hosted by Oxford Prospects and Global Development Institute.
Tuesday, August 10, 7:30 p.m.–10:45 p.m. USA Eastern Time (Wednesday, August 11, 9:30 a.m.–12:45 p.m. Australian Eastern Time)
Webinar: Emerging Location-based Services and Technologies, GeoSurveillance and Social Justice Issues
Digital discrimination is becoming a serious problem, as more and more decisions are delegated to systems increasingly based on artificial intelligence techniques such as machine learning. Although a significant amount of research has been undertaken from different disciplinary angles to understand this challenge—from computer science to law to sociology—none of these fields has been able to resolve the problem on its own terms. We propose a synergistic approach that allows us to explore bias and discrimination in AI by supplementing technical literature with social, legal, and ethical perspectives.
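To make "bias in AI" concrete, here is a minimal sketch of one common way such discrimination is quantified: the demographic parity difference, i.e., the gap in favorable-decision rates between two groups. The function name and the illustrative data are hypothetical, not drawn from any real system or from the study described above.

```python
def demographic_parity_difference(decisions, groups, group_a, group_b):
    """Difference in favorable-outcome rates between two groups.

    decisions: list of 0/1 outcomes (1 = favorable decision)
    groups:    list of group labels, aligned with decisions
    """
    def rate(g):
        # Select the decisions made for members of group g.
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(outcomes) / len(outcomes)

    return rate(group_a) - rate(group_b)


# Hypothetical loan decisions for two groups of applicants:
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Group "a" is favored 75% of the time, group "b" only 25%.
print(demographic_parity_difference(decisions, groups, "a", "b"))  # 0.5
```

A nonzero gap like this is only a starting signal; as the paragraph above argues, deciding whether it constitutes unlawful or unethical discrimination requires the legal and sociological perspectives that a purely technical metric cannot supply.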
When we see a built world, we tend to take its permanence and stability for granted. For those who have chosen coastal homes, that built world goes back at least 50 years, with few residents ever realizing that oceans, lakes, and rivers are living entities constantly in motion. The average person relies upon experts such as architects and civil engineers, and supposed guardrails such as state building codes and homeowner associations, to assess safety when purchasing property. But the 21st-century assumption that the built world is stable is a risky bet, especially in “business-friendly” states.
Arizona continues to build, build, build, and instead of requiring new residents to adapt to the climate, city governments and developers market the very bad idea that the desert can be made green, and thus more desirable.
Unintended consequences of technological development matter in practice and thus are not just of academic interest. SSIT would do well to spark constructive and practical discussion about managing unintended consequences.
The 21st Century Norbert Wiener Conference with the theme: “Being Human in a Global Village” is the third in a series of conferences initiated by the IEEE Society on Social Implications of Technology (SSIT), following events in Boston (2014) and Melbourne (2016).
Just as the “autonomous” in lethal autonomous weapons allows the military to dissemble over responsibility for their effects, there are civilian companies leveraging “AI” to exert control without responsibility.
And so we arrive at “trustworthy AI” because, of course, we are building systems that people should trust and if they don’t it’s their fault, so how can we make them do that, right? Or, we’ve built this amazing “AI” system that can drive your car for you but don’t blame us when it crashes because you should have been paying attention. Or, we built it, sure, but then it learned stuff and it’s not under our control anymore—the world is a complex place.
ISTAS 2021 will be jointly hosted by the University of Waterloo and the University of Guelph (Ontario, Canada) on October 28–31, 2021. Submission deadline: July 13, 2021.
The COVID-19 pandemic has exposed and exacerbated existing global inequalities. Whether at the local, national, or international scale, the gap between the privileged and the vulnerable is growing wider, resulting in a broad increase in inequality across all dimensions of society. The disease has strained health systems, social support programs, and the economy as a whole, drawing an ever-widening distinction between those with access to treatment, services, and job opportunities and those without.
We celebrated AI for mental health equity when it augmented access for marginalized populations. We applauded AI as a complement to current services: practitioners would be less overtaxed and more productive, thereby better serving vulnerable populations.
The public’s faith in science and technology has never been higher. Computer “apps” that track the frequency and point of origin of COVID-related Google search terms and Twitter posts are being used to trace the progress of the virus and to predict the sites of further outbreaks. The United States has been roiled by the death of George Floyd at the hands of the police. Floyd’s killing was captured in a video that has circulated throughout the globe and acquired the near-iconic power of the crucifixion. With the majority of the American people equipped to make audio–visual recordings of police brutality and post them on social media, we expect that crimes such as this will diminish.
https://21stcenturywiener.org/ — 22–25 July 2021, Chennai, India. Infosys co-founder N. R. Narayana Murthy to present the opening speech on 22 July 2021.
Open technology communities are loosely organized, volunteer online groups focused on the development and distribution of open or free software and hardware. “Hacking Diversity: The Politics of Inclusion in Open Technology Cultures” is a study of these communities’ efforts to “hack” the lack of diversity that pervades not only their volunteer communities but also their related disciplines at large.
There is huge potential for artificial intelligence (AI) to bring massive benefits to under-served populations, advancing equal access to public services such as health, education, social assistance, and public transportation. But AI can also drive inequality, concentrating wealth, resources, and decision-making power in the hands of a few countries, companies, or citizens. Artificial intelligence for equity (AI4Eq) calls upon academics, AI developers, civil society, and government policy-makers to work collaboratively toward a technological transformation that increases the benefits to society, reduces inequality, and aims to leave no one behind.