The Thorny Issue of Programming Moral Standards in Machines
We are all familiar with the textbook moral dilemma: A car is driving on a road when a child darts in front of it. If the car swerves in one direction, it will hit a car in the oncoming lane. If it swerves in the other direction, it will hit a tree. If it continues forward, it will hit the child. The car is traveling too fast to brake. Each decision may result in death. While the scenario is an extreme case, drivers make life-and-death decisions on a daily basis. To an extent, we have laws, rules, and etiquette to guide our driving. In other situations, a driver must rely on their internal moral compass. But what if a human driver is not making the decision, and an autonomous machine is? Such scenarios present difficult choices and weighty decisions for human drivers, let alone for the car manufacturers, developers, and engineers who must design machines with such decision-making capabilities.
If the car manufacturers, tech titans, and the U.K. government are to be believed, we can expect an exponential increase in driverless vehicles on our roads over the next five years. Driverless cars are already being tested on U.K. roads. This means that, inevitably, algorithms must be programmed to make consequential decisions in a manner that aligns with current laws and, where laws do not exist, with our moral sensibilities. The authors of the AI Now report [1] state, “AI does not exist in a vacuum.” It is deployed in the real world and has the potential to cause tangible and lasting impact. The driving scenario illustrates the conundrum developers face when launching software that must be equipped to make a moral judgment. Can they be expected to accurately pre-program moral decisions into autonomous machines? If so, whose sense of morality should prevail? Ethical dilemmas, a lack of harmonized views, and bias all come into play. Programming moral standards into algorithms remains one of the thorniest [2] challenges for AI developers.
Moral standards are transient and far from absolute. Moral inclinations [3] may be conditioned on cultural norms, and cultural norms are neither universal nor immutable. Societal values and standards vary over time and geography. Acceptance of premarital sex, women in combat, and homosexuality, as well as bans on slavery, illustrates the seismic shift in values some societies have experienced in recent decades. If a software developer in the U.S. programs an autonomous vehicle to prioritize the safety of the passenger over animals on the road, could the same decision, made by the same machine deployed in India, where cows are considered holy and have the right of way on roads, be judged immoral? Should companies developing autonomous vehicles prioritize commercial gain over a utilitarian concept of safety? Research [4] indicates that buyers are less likely to purchase a car that prioritizes the safety of others over that of the occupants.
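To make this concrete, consider the following minimal Python sketch. It is purely illustrative, not any manufacturer's actual logic: the region profiles, harm categories, and rankings are hypothetical, chosen only to show that the moral judgment ends up living in a table a developer wrote.

```python
# Hypothetical priority tables: lower rank = more strongly protected in that market.
COLLISION_PRIORITIES = {
    "US":    {"occupant": 0, "pedestrian": 1, "property": 2, "cow": 3},
    "India": {"occupant": 0, "pedestrian": 1, "cow": 1, "property": 3},
}

def choose_action(region, options):
    """Pick the option whose harmed party is least protected under the region's ranking.

    Each option is a dict such as {"action": "swerve_left", "harms": "property"}.
    The ranking table, not the sensor data, is where the moral judgment is encoded.
    """
    ranking = COLLISION_PRIORITIES[region]
    return max(options, key=lambda opt: ranking[opt["harms"]])

# An unavoidable-collision scene with only two outcomes left.
unavoidable = [
    {"action": "continue_straight", "harms": "cow"},
    {"action": "swerve_left", "harms": "property"},
]
for region in ("US", "India"):
    print(region, "->", choose_action(region, unavoidable)["action"])
# Prints: US -> continue_straight, India -> swerve_left. The same vehicle in the
# same scene produces two different "moral" outcomes, determined entirely by the table.
```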
Moral decision-making is highly subjective and individualized. It is contextual, specific to the facts, and personalized, depending on the experience, bias, and understanding of the person making the moral judgment. Individuals will react differently to reports that a Texan was not indicted for bludgeoning another man to death. If we learn that evidence suggests the dead man had been raping a child, we may alter our position. When we learn that the man who killed him was the child’s father, as was the case here, and that the girl was his five-year-old daughter, the outcome may sit more comfortably with our own moral compass. On the specific facts, the killing was defensible under state law [5], and the father called 911 in an attempt to save his daughter’s rapist’s life. The death was legally and, some may argue, morally justified.
Morality is a nebulous concept. It is best evidenced [6] by the reaction of an individual when faced with a choice requiring quick action. Such decisions are based on our own internal programming and made in split seconds. They are not always rational or logical; consider a decision to jump in to save a drowning child at the risk of one’s own safety, or to return to a burning building to save a family pet. Humans are opaque, flawed, and sometimes exercise bad judgment. With hindsight, we can analyze and interrogate those actions after the fact. The truth, however, is that morality remains the ultimate black box. The same individual in the same situation, with a single variable changed, may make a different decision. Given the mutable and individualized nature of morality, is it feasible for a programmer to develop software with acceptably harmonized moral standards?
Morality is a uniquely human concept. Unlike other life forms, humans are considered to possess the singular capacity to judge their own actions and those of others. Helen Guldberg [7] writes that humans are not born with this ability but are conditioned to consciously make moral choices. While scientists have found that some animal species exhibit signs of a moral system, the concept of morality is thought to be a distinctly human construct, developed through social values and codes as well as an individual’s experience. The conundrum for software developers is that while AI technologies can equal and surpass human performance on tasks we associate with intelligence (for example, Google’s AlphaGo defeating the world’s best Go players), morality remains a solely human domain. While machines, in particular humanoid robots [8], may appear to possess reflective thoughts, the output is, at least with present technologies, a product of the input data, the rules used to train the algorithms, human supervision, and iteration toward a desired outcome.
This is more than an academic debate. Today, AI systems are deployed in a plethora of everyday decision-making scenarios, including autonomous machines, insurance pricing, and recruitment. Companies deploying algorithms that make consequential decisions are hiring philosophers and social science majors to grapple with these quandaries. It is certain that, in the near future, more and more of us will be faced with the consequences of autonomous decision-making machines (trains, trucks, cars, buses) in our daily commute or school run. An autonomous machine must be equipped to make life-and-death decisions in a manner consistent with the law or, in the absence of laws, with acceptable social norms. Adopting an engineer’s problem-solving mindset, the solution would be to test the decision-making process among a sample group, determine the most frequently selected response, and train the algorithm accordingly (a simple sketch of this approach follows this paragraph). There are, however, ethical implications in imposing someone else’s version of morality on another, as well as the statistical probability that, some of the time, the algorithm will produce an output that does not align with our expectations. The algorithm will be trained on the individual or collective bias of the sample group. The issue is that morality is deeply personal. In start-up companies deploying drones, for example, the founder’s morals influence whether a machine is deployed for global good (weather, agriculture, or rescue missions) or to aid particular nation-states’ defense, surveillance, or border-control strategies. As consumers and as a society, are we prepared for a software developer, or a company, to be the guardian of our moral sensibility?
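As a sketch of that engineering approach, and of what it quietly discards, the short Python example below reduces a sample group's answers to a single training label per dilemma. Every scenario name, option, and response is invented for illustration; the point is that whichever choice wins the vote becomes the "moral" the algorithm learns, and the minority view disappears.

```python
from collections import Counter

# Hypothetical survey responses: for each dilemma, the choices made by a sample group.
survey_responses = {
    "child_vs_oncoming_car": ["swerve_to_tree", "brake_straight", "swerve_to_tree",
                              "swerve_to_oncoming", "swerve_to_tree"],
    "passenger_vs_animal": ["protect_passenger", "protect_passenger", "avoid_animal",
                            "protect_passenger"],
}

def majority_label(responses):
    """Return the most frequently selected response and the share of the group that chose it."""
    choice, votes = Counter(responses).most_common(1)[0]
    return choice, votes / len(responses)

training_labels = {}
for scenario, responses in survey_responses.items():
    choice, agreement = majority_label(responses)
    training_labels[scenario] = choice
    # The agreement figure makes the dissent visible: even the winning label was
    # rejected by part of the sample group, yet only the winner is learned.
    print(f"{scenario}: train on '{choice}' ({agreement:.0%} agreement)")
```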
AI has the potential to greatly improve the human condition. While there may be arguments against the deployment of certain machines and uses of AI technologies, such as facial recognition in autonomous weapons, the underlying technology itself is neutral. Moral considerations must therefore begin before deployment, at the design and development stage of autonomous machines and AI techniques. Ethics should be a core subject in science, business, and engineering curricula. Standards, both internal for companies developing these technologies and external for the industries where they are deployed, will assist in setting governance frameworks and best practice. Data used to train an algorithm must be clean, correctly labeled, and reflective of a diverse and inclusive user base. Robust testing must be carried out prior to deployment, and companies should have processes in place to demonstrate, at least internally, that the ethical implications of algorithmic decisions have been sufficiently considered and that mitigation steps have been put in place.
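As one small illustration of the kind of pre-deployment check this implies, the Python sketch below flags groups that are barely represented in a training set. The attribute, threshold, and records are invented for the example; a genuine audit would cover far more than a head count.

```python
from collections import Counter

def representation_report(records, group_key="region", min_share=0.05):
    """Report each group's share of the training data and flag those below min_share."""
    counts = Counter(record[group_key] for record in records)
    total = sum(counts.values())
    return {group: {"share": n / total, "underrepresented": n / total < min_share}
            for group, n in counts.items()}

# Hypothetical training records for a driving-decision model.
records = [{"region": "US"}] * 900 + [{"region": "India"}] * 80 + [{"region": "Kenya"}] * 20
for group, info in representation_report(records).items():
    flag = "LOW" if info["underrepresented"] else "ok"
    print(f"{group}: {info['share']:.1%} ({flag})")
```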
The transitory and subjective nature of moral inclinations requires ongoing evaluation [9] and iteration of the algorithmic training to ensure that the output continues to resonate broadly with societal norms. Humans, however, are fallible, and morality is a human construct that is subject to change. Despite an engineer’s best efforts to train and test an algorithm prior to release, there may be edge cases in which the outcome affronts our (individual and collective) moral principles. As a society, we are generally accepting of human error. We are less forgiving of technology. When presented with hard ethical dilemmas, is it (morally) justified for humans to demand a higher standard of a mere machine?
Author Information
Krishna Sood is a senior lawyer and an expert advisor to the U.K.’s All Party Parliamentary Group on Artificial Intelligence. The views expressed in the article are the author’s own.