Artificial intelligence could be humanity’s best—or its last—invention. That’s the opinion of Stephen Hawking and other leaders in the AI field, noted in a 2014 article in The Independent. Much of the media portrays AI negatively, publishing articles and airing news segments that illustrate the technology with images of The Terminator. Few members of the media, however, are asking what can be done to reap the benefits of the technology while avoiding its risks.
Those questions require a deeper look, which is why the IEEE Standards Association formed the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, for which I serve as vice chair. In the past year, the initiative brought together more than 100 experts to collaborate on the report “Ethically Aligned Design.” The document strives to address how to design such systems with moral values and ethical principles in mind so that they behave in ways that benefit people and build trust. That includes not invading individuals’ privacy without permission and being accountable for the decisions the systems make.
The report also includes three proposed IEEE standards projects, which are now in the works. They cover a process for addressing ethical concerns during system design, transparency of autonomous systems, and a data privacy process.
PROPOSING A NEW POSITION
We want to see our work put into practice within companies and industries across the globe that are working on artificial intelligence and autonomous systems. To do so, we recommend that such companies create the position of chief values officer to prioritize ethical considerations in AI development.
The chief values officer would be responsible for educating employees, preparing company policy, and overseeing the development of products that will directly affect employees’ and customers’ agency, identity, and well-being. Such an officer also needs to be able to ensure whistle-blowing anonymity for employees.
Andrew Ng, chief scientist at Baidu Research, in Sunnyvale, Calif., published an article in the Harvard Business Review stating that to succeed in the future, businesses need to appoint a chief AI officer. To add to his point, anyone in this position will need thorough training in applied ethics that prioritizes the values of users and companies.
Companies working in AI that don’t prioritize ethical considerations risk building products or services that won’t match the values of their customers. Ten years ago companies started to focus on sustainability; today they also need to pay attention to ethics. Otherwise, the consequences are too great. Invasion of privacy and the hacking of our homes are just two examples.
With that said, those of us who are involved in the initiative do not believe that we have all the answers. We would welcome feedback on the report from IEEE members and others in technical and related fields. To submit your comments, visit the IEEE Standards website for guidelines or write to EAD_feedback@ieee.org.
IEEE Associate Member Kay Firth-Butterfield is a Senior Fellow and Distinguished Scholar at the Robert S. Strauss Center for International Security and Law, University of Texas, Austin, and vice chair of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. Firth-Butterfield will provide more insight on this topic at the annual SXSW Conference and Festival, 10 to 19 March. The session, “Ethically Aligned Design: Setting Standards for AI,” is part of the IEEE Tech for Humanity Series at SXSW.