Tuesday 5:30 – 7:00 p.m. (EDT), 2:30 – 4:00 p.m. (MST), Wednesday 7:30 – 9:00 a.m. (AEST) – “IP Location Services and Automated Biometric Recognition”
Tuesday, August 10, 7:30 – 10:45 p.m. U.S. Eastern Time (Wednesday, August 11, 9:30 a.m. – 12:45 p.m. Australian Eastern Time)
Webinar: Emerging Location-based Services and Technologies, GeoSurveillance and Social Justice Issues
Digital discrimination is becoming a serious problem as more and more decisions are delegated to systems increasingly based on artificial intelligence techniques such as machine learning. Although a significant amount of research has been undertaken from different disciplinary angles to understand this challenge—from computer science to law to sociology—none of these fields has been able to resolve the problem on its own terms. We propose a synergistic approach that allows us to explore bias and discrimination in AI by supplementing the technical literature with social, legal, and ethical perspectives.
When we see a built world, we tend to take its permanence and stability for granted. For those who have chosen coastal homes, that built world goes back at least 50 years, with few residents ever realizing that oceans, lakes, and rivers are living entities constantly in motion. The average person relies upon experts such as architects and civil engineers, and supposed guardrails such as state building codes and homeowner associations, to assess safety when purchasing property. But the 21st-century assumption that the built world is stable is a risky bet, especially in “business-friendly” states.
Disease prevention due to successful vaccination is a double-edged sword, as it can give the illusion that mass vaccination is no longer warranted. Antivaccination movements have existed throughout history, but most recently parents have been declining childhood vaccines at alarming levels [2, S9]. Safety concerns and misinformation seem to be at the forefront of these movements.
Join the Student Discussion Forum in association with ASU PIT on IP Location Services and Automated Biometric Recognition!
Unintended consequences of technological development matter in practice and thus are not just of academic interest. SSIT would do well to spark constructive and practical discussion about managing unintended consequences.
In 2019, IEEE Working Group P7014 began efforts to develop a ‘Standard for Ethical Considerations in Emulated Empathy in… Read More
Over the years, IEEE Organisational Units (OUs) have been endeavoring to increase the gender diversity of speakers on panels at IEEE… Read More
Abstract Since 2016, drones have been deployed in various development projects in sub-Saharan Africa, where trials, tests, and studies have… Read More
With the century termed one of digital connection—from the use of desktops at work, laptops at home, and handy… Read More
For better or worse, we have become familiar with the idea that technologies profile people to deliver a service of… Read More
On that day, at 2:26 p.m. Eastern time, Lunar Orbiter 1, the first U.S. spacecraft to orbit the Moon, was launched from Cape Kennedy. Four days later, at 8:43 a.m. Eastern time, the spacecraft successfully entered orbit around the Moon.
Originally published in The Engineering Ethics blog, August 6, 2018. In a recent New York Times opinion piece, science journalist Melinda Wenner… Read More
How does your culture view the potential for AI?
Do you want to attract the best people? Give them a problem with a purpose. Give them room to work. Give them recognition for their successes — not just internally, but encouraging them to share these at conferences, or in relevant peer communities.
We are asking for AI rationale that can be used to improve operations, or attribute liability. This effort is doomed to failure, and may lead to greater problems.
One result of increased AI integration will be increased empathy for robots. This transformation has potential upsides and risks.
“Why would a Russian oil company want to target information on American voters?” Chris asks in the article. Cambridge Analytica claims to have 4,000–5,000 data points on 230 million U.S. adults.
Skilling up for an AI-powered world involves more than science, technology, engineering, and math. As computers behave more like humans, the social sciences and humanities will become even more important. Courses in languages, art, history, economics, ethics, philosophy, psychology, and human development can teach the critical, philosophical, and ethics-based skills that will be instrumental in the development and management of AI solutions.