Social media companies have intentionally created platforms that actively spread disinformation. What can we do to protect our society against disinformation? A good place to start would be to limit how large and powerful these social media platforms can become.
If it were possible to formulate laws involving vague predicates and adjudicate them, what would be the implications of such minimalist formulations for soft laws and even for “hard” laws? The possible implications are threefold: 1) does possibility imply desirability; 2) does possibility imply infallibility; and 3) does possibility imply accountability? The answer advanced here, to all three questions, is “no.”
Ethical diversity refers to “diverse beliefs … as to what are the most ethically appropriate or inappropriate courses of actions,” and takes into account the different values and beliefs people hold. This diversity is, and has always been, a source of confusion and conflict, from the personal to the international. The answer, however, is to have forums to debate and discuss the ethical choices embedded in everyday life, not algorithms that render those choices invisible.
Discrimination is “embedded in computer code and, increasingly, in artificial intelligence technologies that we are reliant on, by choice or not.”
How do we ensure that tools such as machine learning do not displace important social values? Evaluating the appropriateness of an algorithm requires understanding the domain space in which it will operate.
Developers face a conundrum when launching software that must be equipped to make a moral judgment. Algorithms are being programmed to make consequential decisions that align with laws and moral sensibilities.
At the IEEE 2016 Conference on Norbert Wiener in the 21st Century, held in Melbourne, Australia, July 13–15, 2016, Keith…