Experts Answer Your Questions About Artificial Intelligence

Learn about recent advancements and the potential for a robot takeover

30 June 2016

Artificial intelligence has become a larger part of our everyday lives. In its June special report, The Institute covered AI’s latest developments.

Here to answer your questions about the field are three experts: IEEE Fellow Li Deng, chief AI scientist at the Microsoft Applications and Services Group; IEEE Fellow Fatih Porikli, the computer vision group leader for Data61 at NICTA, Australia’s information and communications technology research center, in Canberra; and IEEE Member John C. Havens, author of Heartificial Intelligence: Embracing Our Humanity to Maximize Machines and executive director of the IEEE Global Initiative for Ethical Considerations in the Design of Autonomous Systems, led by the IEEE Standards Association.

Q: What tasks is AI not good at solving?

DENG: Those that require creativity, intuition, and common sense. For example, the creation of scientific theories, such as relativity and quantum mechanics, requires huge amounts of creativity and ingenious experimentation. AI will not be able to do that for a long time.

PORIKLI: To date, AI has been successful at tasks that are precisely defined, carefully formulated, and rigorously modeled by humans. Humans, not AI, possess the unique ability to conceptualize and distinguish valid solutions to problems. We can understand cause and effect, and we can think outside the box. We can solve problems by developing ideas that may not be immediately evident or obtainable through traditional step-by-step logic. We can invent.

Such creative, critical, lateral, and holistic thinking capabilities are currently not available to AI systems. For instance, AI algorithms are not good at discovering and defining a problem. There is no AI mathematician. AI tools are designed to give answers that conform to their training datasets—without questioning them. Part of intelligence is being able to notice a problem without someone first describing what the problem is.


Q: What will be the next big development or research area in AI?

PORIKLI: Computer vision is one of the most promising disciplines in which the next big development will likely come (see “IEEE Fellow Sets the Bar for Computer Vision”). That is one reason why I’m working in computer vision.

Another area is in systems engineering. Current AI research mainly provides compartmentalized solutions for specific tasks. The next big wave of research may focus on how to design and manage complex systems to deal with intelligent workflow processes, coordination methods, and uncertainty management tools.

DENG: The next big developments will be machine learning and reasoning methods that are effective beyond straightforward prediction tasks, which currently require big data and huge amounts of computing power. Moving forward, machines will likely become more effective with smaller amounts of data and with little training or help from programmers.


Q: Humans experience mental growth, also referred to as maturity. Will AI systems be able to do the same, or will they be more static than humans are?

DENG: Yes, future generations of AI will have this type of maturity and will be able to automatically grow with new knowledge.

PORIKLI: AI systems are likely to be constantly improved with ever-increasing amounts of data to excel further in the specific tasks for which they are designed.


Q: Let’s assume we have developed an AI system whose behavior is indistinguishable from that of a human. What rights, if any, do we grant it? Should it have workers’ rights, for example?

HAVENS: This is a very timely question, because a draft report was submitted to the European Parliament on 31 May with recommendations to the Commission on Civil Law regarding rules on robotics. It contains a paragraph calling on the commission to explore the implications of possible legal solutions, including that of “creating a specific legal status for robots so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause, and applying electronic personality to cases where robots make smart autonomous decisions or otherwise interact with third parties independently.”

Where an AI would be indistinguishable from a human, the question of rights should be in regard to the individuals with whom the AI interacts, versus the system or device itself. For instance, if the AI causes physical harm or if a companion robot is negatively influencing or manipulating its user’s emotions, the manufacturer should be responsible.

If, in the future, an AI were to gain genuine autonomy or sentience, then I believe it should be given the rights of “electronic persons,” with responsibility for its actions shifting from the manufacturer to the AI itself as an individually protected entity or citizen. At that point the AI should also be released from any ownership status or intellectual property ties to its manufacturer. Otherwise, keeping the AI individual as the property of its manufacturer would be a form of cyberslavery.

PORIKLI: No. Why should there be such rights? We should distinguish fact from fantasy. We are not talking about the I, Robot movie, in which humanoids have animal-like emotions. We are dealing with software running on silicon processors. Rights belong in the realm of moral consideration and are granted to those endowed with perception, emotion, and real intelligence. It would be unfair to reduce life to a machine’s mere mimicry of it. In the end, an AI system, however complex, is still a device, such as a thermostat responding to sensors or a character in the SimCity game responding to mouse clicks.


Q: When will personal-assistant applications, like Apple’s Siri and Amazon’s Alexa, be able to communicate with us as flawlessly as we do with other people? Will they then also be able to communicate with one another? And will they be intuitive enough to provide us with the information we need without our having to ask?

PORIKLI: It may depend on the mode of communication, as we do not communicate only verbally but also through body language, eye contact, gestures, touch, and overall appearance. If the mode is email, the time when AI portals will communicate with us flawlessly is, unfortunately, not far off. Beware of AI spammers! As for communicating with one another, those of us in the field have been paving the way for that since the dawn of the Web, and the Internet of Things will now make it possible.

HAVENS: Your questions reflect issues raised by the Turing test, the famous experiment Alan Turing created to identify when a person believes a machine or device is “real.” While I realize these questions pertain to the maturity and accuracy of systems like Siri, many individuals already interact with these tools as if they were people. Even though we all know “she” isn’t real, we cross a mental divide of sorts, where we’re speaking with a program as if it were a person.

Unfortunately, most of the communications from these devices have been designed as personalization algorithms for commerce. While there’s nothing wrong with helping people choose what to buy, defining individuals as strictly consumers ignores a holistic sense of personhood and what we actually need versus what is being sold to us.


Q: How close can we get to AI being a replica of the human brain? What is the best way to design it to get there?

DENG: We’re far away. First, engineers need to fully understand and exploit brain mechanisms essential for human intelligence—which science has not yet been able to do.

PORIKLI: On average, the human brain has 86 billion neurons and possibly 10 times as many neuroglia—which support and protect neurons—playing an active role in communication and neuroplasticity. Each neuron may be connected to up to 10,000 other neurons, passing signals through as many as 150 trillion synapses. The largest deep neural networks consist of 160 billion parameters, most of which are in the fully connected layers. It is not unreasonable to consider a synapse to correspond to a network parameter, and perhaps a neuron to a convolutional filter.
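As a minimal sketch of why the fully connected layers dominate the parameter count, one can simply tally weights and biases per layer in Python. The layer sizes below are hypothetical, chosen for illustration rather than taken from any particular published network:

def conv_params(kernel_h, kernel_w, in_channels, out_channels):
    # weights (kernel_h * kernel_w * in_channels * out_channels) plus one bias per output channel
    return kernel_h * kernel_w * in_channels * out_channels + out_channels

def fc_params(in_features, out_features):
    # weights (in_features * out_features) plus one bias per output unit
    return in_features * out_features + out_features

conv = conv_params(3, 3, 64, 128)   # a typical 3x3 convolutional layer: roughly 74,000 parameters
fc = fc_params(4096, 4096)          # a single fully connected layer: roughly 16.8 million parameters

print(f"convolutional layer:   {conv:,} parameters")
print(f"fully connected layer: {fc:,} parameters")

Even in this toy comparison, one fully connected layer holds more than 200 times the parameters of the convolutional layer, which is why the bulk of a large network’s parameters sit in its fully connected stages.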

The best AI network is still a thousand times less complex than the human brain. Because deep networks are computationally intensive, any realistic training at that scale would require much stronger parallel processing platforms. One option is to design graphics-processing units with 3-D circuitry, which would drastically increase the number of parallel processors. The other option might be growing a network of real brain cells in the lab and developing technology—currently nonexistent—to access and control each cell; in other words, literally constructing a brain.


Q: Science fiction writer Isaac Asimov postulated rules for smart robots to follow that would keep them from turning on humans. Is it possible for AI machines to follow such rules, specifically when in a self-preservation mode? Humans have religion and ethics to keep them in line, but even that doesn’t always work.

HAVENS: Asimov’s laws provide a great thought experiment regarding the ethics of controlling AI and autonomous devices. But the conundrums he describes in the short story “Runaround” demonstrate why a simple list of rules alone can’t be universally applied to every situation an AI system encounters. The need for this larger type of framework is the inspiration behind the IEEE Global Initiative for Ethical Considerations in the Design of Autonomous Systems. We’ve created a charter code of conduct, which will be available in September and focuses on such issues, to help technologists deal with tough ethical concerns regarding the implementation of AI and autonomous technologies.

Scenarios of robots operating for their own purposes would be the result of poor design rather than premeditated malevolence. That’s why prioritizing ethical considerations is key.

PORIKLI: Self-preservation is a universal behavior of organic life, ensuring the survival of self and species. I tend to think AI machines will not acquire collective wisdom and intuition, mainly because their designers are limited in their ability to build such complex machines. The machines will excel at whatever purpose they were designed for, but there is little reason to believe they will ever aspire to self-preservation. Why should self-preservation matter to a machine that does not feel and live? Perhaps, on the contrary, they will look at our society and start destroying themselves. Who knows?
