What ‘Frankenstein’ Can Teach Us About the Future of AI and Robotics

IEEE TechEthics panelists discuss modern innovations in light of the 200-year-old novel

15 May 2018

In the wake of artificial-intelligence advances, Stephen Hawking, Elon Musk, and other tech icons have raised concerns that the technology could have harmful, unintended consequences. Many AI technologists are struggling to address the ethical implications of their creations.

The protagonist in Mary Shelley’s Frankenstein faced a similar dilemma. Shelley’s novel, published in 1818, tells the story of Victor Frankenstein, a scientist who brings a grotesque, sentient creature to life in his lab and realizes he has created a monster. Horrified, the scientist rejects his creation, and the monster goes on a violent, vengeful rampage. Many of the book’s themes resonate today, as technology grows more sophisticated and its potential effects on society remain largely unpredictable.

In honor of Frankenstein’s 200th anniversary, IEEE TechEthics hosted a virtual panel on 1 May that brought together historians, technologists, and philosophers to discuss how lessons from the novel apply to today’s ethical dilemmas about AI and robotics.

Participants included IEEE Member Peter Asaro, associate professor of media studies at the New School, in New York City; Senior Member Dominik Boesl, vice president of consumer-driven robotics at Kuka, an automation company in Augsburg, Germany; and Lisa Nocks, a historian at the IEEE History Center, in Hoboken, N.J. Jean Kumagai, senior editor at IEEE Spectrum, moderated the panel, and Mark A. Vasquez, IEEE TechEthics program manager, gave the opening and closing remarks.

Here are some highlights from the discussion.

TO FEAR OR NOT TO FEAR

Boesl said he doesn’t view robots as a direct threat to humankind—at least not in the foreseeable future. “Prominent technologists have stoked fear by implying that robotics and AI will replace people,” he said. “Some think this dramatic shift is lurking right around the corner, but it’s really not.

“I think the real problem is anthropomorphism, the tendency to assign human qualities to machines—we want to create things that have a soul and are animated like us.”

We might never get to that point, he said, but “we’re developing robots that seem as though they can feel emotions or act out intentions on their own.”

Although robots are not self-aware, AI can harm people, Asaro noted, citing a soon-to-be-published paper he has written on predictive policing. The paper examines two programs in Chicago. One fed data into an AI program to identify people likely to be involved in a shooting, prompting police to keep a close watch on them. The other used the same data to identify at-risk youth and tried to find them part-time jobs and other ways to keep them off the streets.

One problem with the first approach, Asaro said, is that people were being treated like suspects before ever committing a crime, simply based on data. Another problem was that the same people identified as potential perpetrators were just as likely to be victims of gun violence themselves.

“We have to consider how we’re applying AI,” he said, “and ask ourselves, Who are we caring for and what is the impact?”
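To make Asaro’s dual-use point concrete, here is a minimal, hypothetical sketch of the kind of risk scoring such programs rely on. The feature names, weights, and threshold below are invented for illustration and do not describe the actual Chicago systems; the point is that the very same score can route a person into surveillance or into outreach.

```python
from math import exp

# Toy logistic risk model. The features and weights here are invented
# for illustration; the real Chicago scoring algorithm is not public.
WEIGHTS = {"prior_arrests": 0.8, "prior_shooting_victim": 1.1, "age_under_25": 0.5}
BIAS = -2.0

def risk_score(person: dict) -> float:
    """Return a probability-like score in (0, 1) from count/flag features."""
    z = BIAS + sum(w * person.get(name, 0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + exp(-z))

# Note the victimization feature: the same history that flags someone as a
# potential perpetrator also marks them as a likely victim, as Asaro observed.
person = {"prior_arrests": 2, "prior_shooting_victim": 1, "age_under_25": 1}
score = risk_score(person)

# The ethical fork: the score alone does not decide the intervention.
if score > 0.5:
    print(f"score={score:.2f} -> program 1: flag for police surveillance")
    print(f"score={score:.2f} -> program 2: offer job placement and outreach")
```

What differs between the two programs is not the model but the intervention attached to its output, which is where the ethical question lies.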

PREDICTING CONSEQUENCES

Kumagai asked the panelists about the unintended consequences of new technologies, such as job displacement, and whether developers should take responsibility for them.

Boesl said many of the consequences are hard to predict during the research phase. “When the Internet first came about,” he said, “who could have known that years later people would become addicted to watching videos on their mobile phones, for example?

“Those of us who work in robotics and automation know we are tampering with a disruptive technology. The best we can do is raise awareness among engineers that they should take a step back during the development process to consider the unintended consequences of these innovations.”

Nocks brought up another issue with automation: unemployment. “The idea behind robotics has always been to take on tedious or dangerous tasks and save people time,” she said. “But if you create a machine that removes the need for human labor, what happens to those workers who are out of a job?”

Asaro pointed out that the decade before Frankenstein was published included the height of the Luddite movement in England, in which textile workers displaced by machines rebelled. “They organized and smashed equipment, and even killed some of the textile mill owners,” he noted. Because of automation, those workers’ earnings and status suffered for generations.

“That’s not what we want to happen with the automation revolution in the future,” Asaro said. “We want to plan ahead.”

PREPARATION IS KEY

So what can technologists do now, before they realize, like Frankenstein, that they’ve created a monster? Nocks argued that making things faster, cheaper, and more convenient shouldn’t be the only motivating factors for developing technologies. “We also have to think about how we’re impacting people and the environment,” she said. “For example, some of my students worry that grandchildren will stop visiting their grandparents because a robot is taking care of them.”

Boesl added, “AI has very real outcomes, like job loss and even loss of life, when it comes to drones and automated warfare. We have to discuss these consequences in advance.

“When an engineer realizes there’s a problem or there could be a problem, that person has to take responsibility.”

Asaro also highlighted the need for ethical considerations in AI design and policy. He and Boesl are part of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, an activity that brings together technologists, ethicists, policymakers, business leaders, and end users to ensure those involved in developing technologies are educated, trained, and empowered to make ethical considerations a priority.

“With areas like AI, we understand the basic functionality, but there’s no way to understand the full scope,” he said. “As we look at particular algorithms in health care or self-driving cars, we can see what problems will arise and address them by writing and implementing standards.”
