With self-driving cars already on the roads, and robots replacing factory jobs and helping with tasks around the home, it’s clear that artificial intelligence technology is a growing part of our lives. Efforts to design AI systems with their ethical and societal implications in mind, however, and to regulate their development and deployment, are lagging behind.
That was just one of the issues addressed at the IEEE TechEthics conference, held on 13 October at the National Academy of Sciences Building in Washington, D.C. The event brought together AI experts, including technologists, ethicists, educators, and policymakers.
The IEEE TechEthics program was launched last year. Its goal is to showcase IEEE’s role as a thought leader when it comes to the ethical and societal implications of technology and, in the process, establish IEEE as a trusted resource for the field.
“The IEEE TechEthics conference provided a platform to discuss the challenges presented by the advancements of artificial intelligence and identified a strong need for policy development and ethics training for the field,” says Mark A. Vasquez, strategic program senior manager for IEEE who oversees the TechEthics program. “The discussion from the conference will help IEEE formulate a role in this area that the organization is uniquely suited to fill.”
The Institute attended the event. Here are some highlights.
FLAWS IN THE SYSTEM
Kicking off the conference was IEEE Fellow Rodney A. Brooks, founder of startup Rethink Robotics, which manufactures industrial robots. He spoke about the difficulty of gauging AI’s progress, and he questioned how long it will be until machines are advanced enough to match human capabilities across the board and entirely replace humans, a concept known as artificial general intelligence.
“We’re not close to an AI takeover,” Brooks reassured the audience. “Let’s not sensationalize the technology.” He added that although Elon Musk and Jeff Bezos are among the greatest entrepreneurs of our time, it doesn’t mean they’re always doing the ethical thing.
Discussion points included who will be responsible when AI systems go awry. If machines are designed to learn on their own, how can engineers predict what they will do? One example presented in the “Influencing the Next Generation of Engineers via Ethics Education” talk was Microsoft’s chatbot Tay, introduced last year.
The bot was part of an experiment in conversational understanding, in which it learned how to engage with others through Twitter. The more you chatted with Tay, Microsoft said, the smarter it would get, learning to engage people through “casual and playful conversation.” But in less than 24 hours, Tay was tweeting misogynistic and racist comments due to language it picked up from other Twitter users, prompting the company to shut it down.
Speaker Deborah G. Johnson, professor emeritus of applied ethics at the University of Virginia, said she believes a company that introduces a harmful AI application is responsible for its actions. The company’s engineers must conduct due diligence by testing and incorporating technical standards until they believe the application is safe enough for consumers to use, she said. Several participants expressed skepticism that it is possible to foresee every unintended consequence, but Johnson argued that such thinking excuses engineers from liability.
A potentially life-threatening example of AI technology was presented during the “Self-Driving Cars and Beyond” panel discussion. IEEE Senior Member Missy Cummings pointed to a real-life scenario in which researchers placed four black-and-white stickers on a stop sign. A self-driving car interpreted the sign as a speed-limit sign and sped up rather than stopping; human drivers would have recognized the red octagon as a stop sign. “The computer vision systems on these cars are extremely fragile,” Cummings said. “The technology is not ready. Their sensors are immature.”
TALKING TO LEGISLATORS
Cummings said that when she meets with policymakers, they are surprised by the examples she gives of why autonomous cars and other technologies are not ready for widespread use. She is advocating that self-driving vehicles be marked so that other drivers have the option of keeping their distance. An autonomous car or truck traveling at high speed on the highway could do a lot of damage, she added.
Unfortunately, industry has a lot of influence on government regulations, panelists said. And with 10 million self-driving cars expected to be on the road by 2020, panelist Jason Borenstein, director of graduate ethics programs at Georgia Tech, called self-driving cars “a social experiment on a grand scale.”
IEEE Member Terah Lyons called working with legislators a “contact sport.” A participant on the “Social and Personal Impacts of AI” panel, she said the onus should be on technologists to influence laws and regulations in their areas of expertise. Lyons, a former advisor for the Obama administration’s Office of Science and Technology Policy, said there is a lot of technology illiteracy in government.
She also pointed out that countries differ in their attitudes toward regulating AI systems. Japan, for example, has an aging population and is building robots to fill a worker shortage. In the United States, by contrast, the fear is that AI will lead to mass unemployment.
Related to the TechEthics program’s objectives is the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, led by the IEEE Standards Association. The initiative released “Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing With Artificial Intelligence and Autonomous Systems” last December, with an updated version to be released this year.