The recently published “Ethically Aligned Design,” a 136-page report by IEEE, boldly goes where no report has gone before, with more than 225 mentions of ethical issues surrounding the development of artificial intelligence and autonomous systems.
The document centers on the consideration of human well-being as the primary goal when designing intelligent machines. That’s in contrast to other reports, like the “One Hundred Year Study on Artificial Intelligence (AI100),” published last year by Stanford. “AI100” stated, “The difference between an arithmetic calculator and a human brain is not one of kind, but of scale, speed, degree of autonomy, and generality.” That frightening statement makes clear that the “AI100” authors did not appreciate what the IEEE report’s authors so clearly understand: People are not machines, and machines are not people.
The authors of “Ethically Aligned Design” assume that there are “users” and there are “systems,” and that excellence in design means users are well served by their systems. That includes providing users with powerful tools that are comprehensible, predictable, and controllable.
Responsibility as a general principle is key when developing automated tools that are intended to help people, whether in the home or the factory. The focus on responsibility in the IEEE report puts the emphasis on ethics, values, and human norms. One section focuses on autonomous weapons systems’ functions, such as target selection, attack, and self-defense. I agree that allowing for human intervention in such decisions as whom to kill, and whom not to, is the ethical route.
The report clearly states, “Such systems created to act outside of the boundaries of ‘appropriate human judgment,’ ‘effective human control,’ or ‘meaningful human control,’ undermine core values technologists adopt in their typical codes of conduct.”
I think this report is not about designing strong or weak artificial intelligence systems, but instead about taking a look at a post-AI world and the need for human control over such systems. The report authors emphasize: “We are concerned about a catastrophic loss of individual human autonomy.” And they wisely warn that “some systems may negatively affect human psychological well-being.”
The report’s humanitarian section urges caution: “Overly optimistic advocacy about the positive outcomes competes with legitimate concerns about the emerging individual and institutional harms related to privacy, discrimination, equity, security of critical infrastructure, and other issues.” Another telling line in that section states: “The attempt to implant human morality and human emotion into AI is a misguided one in designing value-based systems.” This clear message serves as a bright-red stop sign that can save implementers from creating dangerous systems.
PATH TO AN AUSPICIOUS FUTURE
If this report were to become the basis for AI design and education, I would be much more hopeful about how AI technology is being developed and applied. I haven’t come across another document that so effectively discusses ways to shape technology for the better. It’s amazingly fresh, thoughtful, coherent, and well-documented. And how appropriate that it was developed in collaboration with 17 IEEE committees, with more than 200 members involved! It demonstrates how humans working together, discussing possibilities, and converging on a common goal can deliver visionary thinking that will not be replicated by AI any time soon.
IEEE Fellow Ben Shneiderman is a computer science professor at the University of Maryland, in College Park, and a member of the National Academy of Engineering. He is the author of The New ABCs of Research: Achieving Breakthrough Collaborations, published by Oxford University Press.