When reflecting on the societal implications of technology, there is always a temptation to be overly optimistic or pessimistic. Nowhere is that more obvious than in the field of artificial intelligence. In scientific literature, fiction, film, and television, most depictions of AI have presented either a utopian or a dystopian future.
The IEEE AI and Ethics Summit, to be held 15 November in Brussels, will take a more realistic look at the ethical implications of artificial intelligence. The speakers include not only technologists but also philosophers, social scientists, legal experts, and policymakers. Together, they will consider the social, legal, and philosophical questions associated with AI, such as whether certain applications should be tightly regulated (or even banned) and whether it is possible to program ethical behavior into machines.
But first, a look at the current discussion.
THE ONGOING DEBATE
As I said, when technologists discuss AI’s social implications, they tend to lean toward either utopian or dystopian perspectives.
In the utopian future, AI will make it possible to eradicate poverty, disease, and hunger while reversing climate change. Advanced computing technologies will offer people vast new capabilities. Automation, robots, and humanoids will take over tasks people don’t want to do, perhaps because they find them boring or dangerous. When it comes to improving quality of life, AI is an attractive option.
In the dystopian future, AI machines will destroy the human race. If we fail to consider the unintended consequences, we could create machines that, on their own, come to regard the continuity of the human race as a low priority, irrelevant, or even detrimental to their existence, if they notice us at all.
A 2014 opinion piece on AI developments coauthored by Stephen Hawking characterized self-driving cars and virtual personal assistants as “symptoms of an IT arms race.”
Hawking’s coauthors were Stuart Russell, an English computer scientist; Max Tegmark, a Swedish-American physicist and cosmologist; and Frank Wilczek, an American theoretical physicist and Nobel laureate. While acknowledging AI’s enormous potential benefits, the authors of the article, “Transcending Complacency on Superintelligent Machines,” highlighted the risks, challenging us to consider this: “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
They also suggested that AI’s impact is “potentially the best or worst thing to happen to humanity in history” and asked readers to contemplate “what we can do now to improve the chances of reaping the benefits and avoiding the risks.”
We will share thoughts and perspectives on those and other topics from some of our distinguished panelists in a future blog post. To register for the event or to receive more information, visit the IEEE Summit website.
IEEE Senior Member Paul M. Cunningham is incoming president (2017–2018) of the IEEE Society on Social Implications of Technology, the projects chair of the IEEE Humanitarian Activities Committee, and a member of the 2016 IEEE AI and Ethics Summit Program Committee.