IEEE Member Brian David Johnson, Intel’s futurist, covered the machines of tomorrow and how they will influence our lives in a recent talk to IEEE employees in Piscataway, N.J.
He noted in that session that "futurists do not predict the future; they build it." To do what he calls "futurecasting," he looks to scientific research, trends, and even science fiction to provide Intel with a pragmatic vision of the world a decade from now. Johnson is involved with IEEE, working on strategic planning with its Board of Directors. He also writes a regular column for IEEE's Computer magazine.
To continue the conversation, Johnson responded to questions from staffers. Here is what he had to say about how he got the job of futurist, artificial intelligence, and the fate of Moore’s Law.
Q: How did you become a futurist? Did you create your job?
When I first came to Intel, I was a systems architect designing how people would interact with Intel’s technology. I was modeling what it would feel like to use our products. Essentially, I was already doing the job.
When I was offered the position of futurist by our chief technology officer at the time, I was hesitant. It was a huge public responsibility. I take what we do very seriously, and I realize when we work at the global scale we must be accountable for the experiences we are delivering to people. But I did accept it and it was one of the best decisions of my professional life.
Q: Is your job as a futurist based on some science of rules and trends, or mostly based on imagination?
My job and my futurecasting process are rooted in science. I use social science, technical research, economics, and expert interviews. I use this fact-based approach to model what it will feel like to live in the future.
But along with that, imagination is an important part. We have to imagine the future we want and the future we want to avoid before we can go about building it. Nothing amazing was ever built by humans that wasn’t imagined first.
Q: In the next decade, what kind of technology will we see integrated into the workplace that doesn’t exist today?
I've done a good amount of research on the future of work. Over the next 10 years, we're going to have four generations in the workforce. Technology will play a key role because software will be able to know you as an individual. That's great for security, but it's also great because your work machines will be able to tailor and optimize themselves to your working style on a day-by-day or even hour-by-hour basis. In the future, our machines should be able to recognize and accommodate our habits and routines.
Q: How much further into the future do you think Moore’s Law will last?
You always have to remember that Moore’s Law isn’t a scientific law. It’s been an inspiration for Intel and the industry, but it is not a fundamental part of the universe. It’s not like Newton’s Three Laws of Motion.
Around 2007, I argued that just making chips faster wasn't good enough. Simply continuing to fulfill Moore's Law isn't enough. We have to tailor the experience of computers to what people need, and when you do that, Moore's Law often takes a back seat. That said, the laws of physics tell us Moore's Law will have to end. But for the next 10 to 15 years, it will remain relevant.
Q: How much of the future will resemble "I, Robot," with artificial intelligence (AI) getting more involved in our lives to the point that we can't always control it?
I do a lot of work in AI. Most people don’t realize AI is already a huge part of our lives. When your plane lands, when you order a pizza, or when you buy a book online, AI is helping. We will have more and more AI applications because the technology is simply a tool and can help us in myriad new ways. In sectors like energy, transportation, and health care, smarter machines can make our world more sustainable, efficient, and healthier.
Will AI replace humans? Nope! We always have to remember that we are in control. It's up to us what the technology does, even when we are programming it to be autonomous. But we have to accept that responsibility and talk about what we want and don't want from AI. That debate is happening right now, and it's an important one.
Q: What future technological development do you dread the most? What scares you about the future?
What scares me about the future has little to do with technology. Technology is just a tool. A hammer is just a hammer and is only interesting when you use it to build a house. What we do with technology and what we do with machines is what’s important.
By that gauge we need to hold ourselves responsible for the machines we build. We need to understand that we imbue our technology with our humanity, and with our hopes, dreams, and values. What scares me is when people give up that power and believe technology is in control.
Q: How do you incorporate the variances of a global marketplace when envisioning the future? Do you find some areas more likely to embrace the upcoming changes you foresee?
Typically, it comes down to people and their vision of the future. Some embrace the future while others worry about what it will bring. Usually, the reality of the future lies somewhere in the middle—with both the good and bad.
Several studies have shown that people, communities, and governments that talk about the future, just talking and thinking about it, enjoy a greater level of prosperity. It seems that simply engaging in a conversation about the future is good for the future!
What do you think the future will bring? Share your ideas in the comments section below.