Designing With Ethical Principles in Mind

An IEEE initiative calls for human well-being and other values to be considered in the design of AI and autonomous systems

8 May 2017

There’s little doubt that drones, robots, and self-driving cars are poised to transform our lives, possibly on the scale of the agricultural and industrial revolutions. As designers of autonomous and artificial intelligence systems forge ahead, not enough thought is being given to the systems’ unintended consequences, whether good or bad. Although the applications are expected to benefit society, they also are taking away jobs. And there’s the possibility that the systems could be used in unanticipated, detrimental ways.

Concerned that such issues are not being fully addressed, IEEE last year launched its Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. An IEEE Standards Association Industry Connections activity, the initiative aims to bring together technologists, ethicists, policymakers, business leaders, and end users to ensure those involved in developing technologies are educated, trained, and empowered to make ethical considerations a priority.

The initiative’s job—not an easy one—is to determine how best to uphold human rights, prioritize the systems’ benefits for humanity and the environment, and mitigate risks and negative impact.

“Our members working on these systems are aware of their possible negative consequences, so it’s natural for IEEE to be sensitive to the ethical issues that could arise,” says IEEE Fellow Raja Chatila, the initiative’s executive committee chair.

IEEE Member John C. Havens, the initiative’s executive director, adds, “The mainstream media tends to polarize the issues related to these systems, reporting that AI and robots will either save the world or destroy it, and an objective viewpoint is sorely lacking. That’s why our initiative is global, open, and inclusive, and welcomes all perspectives.”

The initiative is part of a broader program known as IEEE TechEthics, which provides a platform for addressing ethical and societal implications across a variety of technical areas. IEEE TechEthics was launched with a focus on artificial intelligence and autonomous systems, and over time it will expand to other areas.

The initiative has produced recommendations that address ethical considerations related to designing autonomous systems and has launched several standardization projects.

PROJECTS UNDERWAY

The first document produced by the initiative was released in December. It contains more than 80 so-called candidate recommendations, providing principles and guidelines for the ethical design and implementation of systems that are expected to become pervasive throughout society. More than 100 thought leaders from academia, science, government, and industry in the fields of artificial intelligence, ethics, philosophy, and policy provided input to “Ethically Aligned Design: A Vision for Prioritizing Human Well-being With Artificial Intelligence and Autonomous Systems.” Autonomous and intelligent systems encompass computational intelligence, machine learning, deep learning, cognitive computing, and algorithm-based programs.

The 138-page document covers methods that could guide ethical research and design. Suggestions are presented for defining, accessing, and managing personal data; maintaining control over autonomous weapons; and addressing economic and humanitarian ramifications. Separate committees worked on each of the document’s eight sections. They laid out the issues and candidate recommendations, and presented background for each. They also provided a list of publications that shed more light on the topics.

“These were not simple subjects that one could write a paragraph or two about and be done,” Havens says. “The sections are generally meaty and specific. People can agree or disagree, but at least they have a place to start. Ethically aligned design, along with referring to a rethinking of systems design to provably demonstrate your products or devices honor user values, means listening to other cultures as well.”

Adds Chatila, “The document is really about looking at what we have developed as technologists, and implementing IEEE’s mission: advancing technology for humanity. We’re translating this mission into concrete actions.”

You can download the entire document in English—or just the executive summary—for free. The executive summary also will be available in Chinese, Japanese, and Korean by November, when the second version of “Ethically Aligned Design” is scheduled to be released. Those who have additional suggestions can contribute them by 15 May using the submission guidelines document. All submissions will be posted in a public document, to be made available next month.

The second version of “Ethically Aligned Design” is expected to contain five more sections, as well as updated content from the eight committees that created the initial version.

The initiative has inspired seven IEEE standards projects. They include protocols that could help avoid negative bias in code, processes to ensure data privacy, and guidelines and certifications for storing, protecting, and using student and employee data in an ethical manner.

GETTING THE WORD OUT

To publicize IEEE’s ethics efforts, members working on the initiative have been attending and speaking at events around the world. They include the World Economic Forum, the International Association of Privacy Professionals Global Privacy Summit, and the Conference on Governance of Emerging Technologies.

A campaign on Twitter, #OurAIVision, is asking people to post their ideas for what else the “Ethically Aligned Design” document should cover.

Although ethics efforts are underway elsewhere, IEEE’s initiative might play a special role, Chatila says. “Whereas other initiatives are either part of some industry group or country, this initiative is global,” he says. “I believe this is the place where all these other initiatives eventually could converge.”

This article is part of our May 2017 special issue on ethics in engineering.
