The Ethics of Artificial Intelligence

A new book tackles the many questions that arise from robot-human interactions

15 March 2016

This is an edited excerpt from my new book, Heartificial Intelligence: Embracing Our Humanity to Maximize Machines, which was published in February by TarcherPerigee.

I first encountered the following quote in P.W. Singer’s book Wired for War, which focuses on the ramifications of militarized artificial intelligence.

You can’t say it’s not part of your plan that these things happened, because it’s part of your de facto plan. It’s the thing that’s happening because you have no plan. … We own these tragedies. We might as well have intended for them to occur.

—William McDonough, American author and thought leader

Singer makes the point that while McDonough said these words in the context of ecological sustainability, they could just as easily apply to AI. In his book, Singer proposes that a “human impact assessment” be required of any organization before it begins production on an autonomous system or machine.

“This will not only embed a formal reporting mechanism into the policy process of building and buying unmanned systems, but also force the tough legal, social, and ethical questions to be asked early on,” he writes. While Singer is referring largely to militarized systems, I firmly believe this type of assessment should be required of any organization that uses evolutionary or machine-learning algorithms. In the same way an organization is held accountable for its potential effects on the environment, it would now be held responsible for the impact of its automation.

“Almost all of our laws are based on the underlying assumption that only humans can make decisions.” In my interview with John Frank Weaver, author of Robots Are People Too: How Siri, Google Car, and Artificial Intelligence Will Force Us to Change Our Laws, we discussed how law and policy are having to catch up quickly with AI technology that’s already available in products like automated vehicles. “A lot of the new technology that’s coming out is artificially intelligent and autonomous,” notes Weaver. “It’s different from robots and machines of the past because now they can also analyze and judge.”

Such abilities distance the current AI environment from the distrust of technology that marked England’s Luddite Rebellion of 1811 to 1813. Skilled artisans fighting the onslaught of industrialization and cheap labor is a far cry from concerns about how a self-driving car will be programmed in life-or-death situations. And as The Atlantic notes in its article “The Ethics of Autonomous Cars,” the lack of any laws regarding non-human actors gives companies like Google broad latitude to push AI technology forward in an ethical and regulatory vacuum. The article quotes Stanford law fellow Bryant Walker Smith, who noted that Google’s cars are “probably legal in the United States, but only because of a legal principle that ‘everything is permitted unless prohibited.’”

So what else would be permitted? As Kate Darling of the MIT Media Lab notes in her paper “Extending Legal Rights to Social Robots,” “Long before society is faced with the larger questions predicted by science fiction, existing technology and foreseeable developments may warrant a deliberation of ‘rights for robots’ based on the societal implications of anthropomorphism [the attribution of human characteristics to animals or objects].”

In terms of robot rights, the basic idea is that since corporations have been legally granted personhood, these rights could extend to the autonomous devices they manufacture, whether an Internet-enabled refrigerator, a robotic toy like the Furby, or a sex-bot. When it comes to “companion” or “social” robots, designed to increase empathy or wellbeing in humans, there’s a strong possibility we’ll soon test our ethics regarding AI in both the culture and the courtroom. We already feel emotionally attached to our cars; think how those relationships will blossom when Siri-like assistants become part of their internal workings. Having your car stolen could take on repercussions akin to kidnapping, and we haven’t yet figured out as a society how to deal with the ethical implications involved.

John C. Havens is an IEEE member and the founder of The H(app)athon Project. He is the author of Hacking Happiness: Why Your Personal Data Counts and How Tracking It Can Change the World and Heartificial Intelligence: Embracing Our Humanity to Maximize Machines. He has contributed articles on technology to Mashable, Slate, and TechCrunch, and has spoken at TEDx and SXSW Interactive.
