At the IEEE 2016 Conference on Norbert Wiener in the 21st Century, held in Melbourne, Australia, July 13–15, 2016, Keith Miller was asked to suggest questions for a panel convened to discuss the question, “Can we program ethics into AI?” (Panel members were Katina Michael (panel chair), James J. Hughes, Greg Adamson, Thao Phan, and Marcus Wigan.) He wrote a dozen such questions, and they are listed here.
The title “Can we program ethics into AI?” has many interesting aspects.
Just for a moment, let’s pretend that, under some clear definitions and measurements, the answer to this question is “Yes.” I hope you will see that you need not agree with that hypothetical in order to answer a significantly different question: “If we CAN program ethics into AI, SHOULD we?” That is, do you think it is absolutely established that AI making its own ethical decisions is a good idea?
Returning now to the question “Can we program ethics into AI?” let’s pick apart some of the inherent problems and assumptions inside that question.
Whenever we establish a programming goal, good software engineering principles require us to prepare to test the resulting system to ascertain whether or not that goal has (in all probability) been accomplished. Can you envision a testing procedure that you would find convincing as a demonstration that ethics has been programmed into AI? If you were convinced by the results of such a testing procedure, would you expect other interested parties to be convinced as well? If you can’t think of any such procedure, do you think the enterprise of attempting to program ethics into AI is still a practical goal?
Do you think the word “ethics” in our central question is a static thing, or a dynamic thing?
That is, if an artificial intelligence exhibits ethics (e.g., ethical behavior or ethical reasoning) today, should we be confident that the same program will still appear ethical when the AI finds itself in a different setting tomorrow? If “ethics” is not a static thing, then must the ethics programmed into AI necessarily take the form of a dynamic program that can “learn” and adjust? If so, then what figure of merit do you think should be used to guide and automate that learning? If not (that is, the program does not learn and does not adjust over time), then is the resulting “ethics” still valid?
Joanna Bryson has famously said that “robots should be slaves.”
If we program ethics into AI (again, let’s pretend we find that we someday can), will it be reasonable to treat the resulting AI as a mechanical slave? Or should an entity (even a silicon-based entity) that can tell the difference between right and wrong be given more respect than the status of a slave?
Do you think that an entity being able to tell the difference between right and wrong is a prerequisite for having ethics?
What are your views of how we judge HUMANS as ethical beings?
And how do those views affect your answer to the question, “Can we program ethics into AI?”
Does AI have to achieve ethical behaviors in a manner similar to how humans achieve them in order for us to declare the AI “ethical”?
Are machines, or should they be, declared to BE ethical as long as their behaviors APPEAR ethical to an outside observer?
If I am a strict utilitarian and you are a strict Kantian deontologist (and we are both humans), then you and I are likely to differ in our judgments about what is the right thing to do in a particular situation.
I am unlikely to declare that you are unethical merely because we come to a different conclusion about a particular case; we just disagree about the question at hand. Are we likely to give a machine programmed to be ethical the same kind of consideration when the machine disagrees with our judgment?
Will reading the machine’s program necessarily give us a clear picture of why the machine made a particular decision?
Note that learning algorithms such as neural nets are notoriously difficult for humans to understand once they have been trained and deployed.
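To make that opacity concrete, consider a minimal sketch (using scikit-learn’s MLPClassifier purely as an illustration, not anything discussed by the panel): even for a tiny trained network, the “program” one could read consists of nothing more than matrices of learned weights.

# A minimal, illustrative sketch (assuming scikit-learn is available):
# after training, the network's "program" is only arrays of numbers.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X, y)

print("Prediction for the first sample:", clf.predict(X[:1]))

# Inspecting the learned parameters shows shapes and raw numbers,
# but nothing that explains why that particular prediction was made.
for i, w in enumerate(clf.coefs_):
    print(f"Layer {i} weight matrix, shape {w.shape}:")
    print(w)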
What do you think is the single biggest hurdle to overcome in the road to programming ethics into AI?
Would you characterize that hurdle as a technical programming problem, or the need to better understand HUMAN ethical reasoning before trying to program that reasoning into a machine?
You have personal thoughts about this question.
But separate from your personal opinion, what do you think the general public, in the main, thinks about the question? What do you think AI researchers, on the whole, think about it?
Given your opinions on the probability of success, and the value of the goal in the first place, do you think society should actively pursue the goal of programming ethics into AI?
If no, why not? If yes, then how much of a priority (in terms of public funding) should the project be?