As autonomous and intelligent systems grow more ubiquitous and sophisticated, developers and users face an important question: How do we ensure that when these technologies are in a position to make a decision, they make the right decision — the ethically right decision?
It's a complicated question, and there's no single right answer.
But there is one thing that people who work in the budding field of AI ethics seem to agree on.
"I think there is a domination of Western philosophy, so to speak, in AI ethics," said Dr. Pak-Hang Wong, who studies Philosophy of Technology and Ethics at the University of Hamburg, in Germany. "By that I mean, when we look at AI ethics, most likely they are appealing to values ... in the Western philosophical traditions, such as value of freedom, autonomy and so on."
Wong is among a group of researchers trying to widen that scope by looking at how non-Western value systems — including Confucianism, Buddhism and Ubuntu — can influence how autonomous and intelligent systems are designed and how they operate.
The work is part of a larger effort by engineering association IEEE (Institute of Electrical and Electronics Engineers), which released the researchers' recommendations in its latest “Ethically Aligned Design” report. As part of the effort to create a set of ethical standards that consider non-Western value systems and ethical traditions, the organization has also solicited feedback from people around the world and is looking to incorporate those comments into upcoming versions of the report.
"We're not providing black-and-white answers," said Jared Bielby, who heads the Classical Ethics committee tasked with some of this work. "We're providing standards as a starting place. And then from there, it may be a matter of each tradition, each culture, different governments, establishing their own creation based on the standards that we are providing."