Innovative Information and Communication Technologies play an important role in e-governance and digital democracy. There is unprecedented opportunity for community collective choice, whereby citizens who are affected by a set of governing rules can help to select policy options and rank spending priorities.
Politics has always required dialogue, deliberation, negotiation, and compromise. But now the facts themselves are in dispute.
Will AI be our biggest ever advance — or the biggest threat? The real danger of AI lies not in sudden apocalypse, but in the gradual degradation and disappearance of what makes human experience and existence meaningful.
Given the current lack of regulation, there is nothing in principle to stop unscrupulous organizations from deploying surreptitious robotic olfaction.
As VR has hit the mainstream, much debate has arisen over its ethical complexities. Traditional moral responsibilities do not always translate to the digital world. We argue that one aspect is essential to ethical responsibility in virtual reality: VR solutions must integrate ethical analysis into the design process and actively disseminate best practices.
Many recent advances in implantable devices would, not so long ago, have been strictly in the domain of science fiction. At the same time, the public remains mystified by, if not conflicted about, implantable technologies. Rising awareness of the social issues surrounding implantable devices calls for further exploration.
Technology has been the source of intrinsically liberating devices, even if a number of them have proved lethal. All of this is precisely what defines the technological endeavors that constitute the backbone of our civilization.
The time of robotic deception is rapidly approaching. We are bombarded with warnings about the inherent ethical dangers of the approaching robotics and AI revolution, but far less concern has been expressed about the potential for robots to deceive human beings.
If digital technologies can be designed to maintain or sustain values, then the same technologies can be designed to manipulate or undermine those same values.
We define “good” technological ideas as sound technological designs, developed using participation-based methods, that seek to promote the beneficial uses of technology (through the harnessing of technological potential) while minimizing or, ideally, eliminating undesirable effects on individuals and society. These approaches should lead to the development and deployment of practical solutions that fulfill the needs of the intended end-users and/or solve a given problem.
As social media transforms free speech the world over, a pervasive infiltration of the information highway is underway: individuals and entities using bots and human agents to invade our privacy and channel extremist, hateful speech in propaganda-like campaigns bent on undermining democratic institutions.
The increasing number of poorly performing dam projects deployed in developing countries over the last two decades illustrates a disconnect between planners, stakeholders, and the technological energy solutions of choice.
It can be glibly asserted that technology makes accomplishing various activities easier. But it is not always obvious for whom it makes accomplishing what easier. For example, the Internet has had a profound impact on academic publishing: the transition from printed paper to digital format has ostensibly made it “easier” for academics to put their work in the public domain and, if they can actually win attention in a sound-bite-distracted, social-media world, reach a wider audience than ever before.
It is necessary to start somehow, even if you’ve got no map, no knowledge of the destination, and no milometer to measure the distance that has been covered. This can sometimes be the essence of collective action for addressing wicked problems. Sometimes human behavior defies top-down direction and even nudge, and begins instead with a single initiating event and snowballs from there.
Is it unreasonable for us to want more from AI-inspired machines — something more than, for example, a robot that can get up off the ground and recover from being hit with a club?
The big issue is mass-scale big data collection: social media intelligence; CCTV; behavioral biometrics, facial recognition, and visual analytics used to monitor human activity; keystroke-level tracking of end-users by third parties on Internet websites; in-home technology devices that conduct ICT surveillance and home monitoring; and even the fitness trackers we carry alongside our mobile phones, which stand to influence our health insurance premiums.
Today, over 90% of U.S. teenagers are online. When it comes to social media, 50% of all teenagers log on at least once a day, with 22% logging on more than 10 times a day. We, like our parents and their parents before them, are worried about the effect that technology is having on the development of our kids. The author discusses five rules for teaching teens to live responsibly with technology.
The Trump administration cannot simply reject current theories of climate change on no stronger grounds than that they conflict with a constituency’s self-interest or exceed one’s understanding.
Australian Aboriginal sovereignty is no longer just about Aboriginal communities retaining rights to their own land. The latest, and among the most brutal, forms of dispossession are data retention, eroded privacy, and the unwarranted use of personal data collected, analyzed, and intelligently manipulated by geographically remote entities, all thanks to the Internet.
Unmet local concerns related to renewable energy projects can result in costly project delays or cancellation. Strong political and financial incentives encourage state authorities and renewable energy developers to address issues of social acceptance.