Much of Intel’s success as a microprocessor manufacturer over the past four decades has come from the company’s ability to understand and anticipate the future of technology. Intel co-founder Gordon Moore famously observed in 1965 that the number of transistors that could be placed on an integrated circuit was doubling every year, a rate he revised a decade later to a doubling every two years. This assessment, which came to be known as Moore’s Law, proved to be a highly accurate prediction of what his business could accomplish with generous research and development investments and a meticulous product road map.
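
To make that doubling concrete, here is a minimal Python sketch of the arithmetic behind Moore’s Law. The 1971 Intel 4004 baseline of roughly 2,300 transistors and the fixed two-year doubling period are assumptions chosen for illustration, not figures taken from this article.

```python
# Rough Moore's Law projection: transistor counts doubling on a fixed schedule.
# The 1971 Intel 4004 baseline (~2,300 transistors) is an illustrative assumption.

def transistors(year, base_year=1971, base_count=2_300, doubling_years=2):
    """Project a transistor count for a given year under a fixed doubling period."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

for y in (1971, 1981, 1991, 2001, 2011):
    print(f"{y}: ~{transistors(y):,.0f} transistors")
```

Run for 2011, the year of this interview, the projection lands at roughly 2.4 billion transistors, the same order of magnitude as the largest processors shipping at the time.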

As Intel’s microprocessors grew smaller, faster and cheaper, they helped to give birth to personal computing and mobile devices that once existed in the realm of science fiction. So it comes as no surprise that science fiction serves as a key inspiration for Brian David Johnson—Intel’s official futurist and the man who is paid to craft visions of both Intel’s prospective technologies and what coming years hold for the entire computing industry.

One of Johnson’s tasks in his unusual role is to promote Intel’s Tomorrow Project, launched last year to engage the public in discussions about the direction of computing, as well as its impact on society. As part of the Tomorrow Project, Intel also publishes science-fiction anthologies featuring short stories (and including introductions by Johnson) that place emphasis on what hard science portends rather than fantasies that break the laws of physics. All these tales, though, are intended to convey the message that humanity ultimately still controls its own destiny.

Scientific American recently spoke with Johnson about what scares people most about the future of technology, what we can learn from the past and what it takes to become a prognosticator (nature or nurture, or a little of both?). Excerpts follow.

Scientific American: What will it feel like to use a computer in 2020?
Johnson: Well, I have good news and bad news. Which do you want first?

Let’s start with the bad news, which really isn’t bad—it’s more pragmatic. In 2020 using a computer will feel remarkably like it does today in 2011. We will still have keyboards and a mouse, touch screens and voice controls. We will still surf the Web and chat with our friends, and many people will still have way too much e-mail in their inboxes. I don’t think this is a bad thing. I actually find it quite comforting, but it lacks the sizzle of jet packs and rocket cars. Now, let’s get to the good stuff.

In 2020 using a computer will be awesome. Just as the mouse, the touch screen and voice [control] radically changed how we interact with computational systems, so, too, will sensor networks, data aggregation and the continuing miniaturization of computational power. I don’t really make predictions, but the one thing I can tell you about the future is that we will have more computers and more computational power and that computational power will get further knit into the fabric of our daily lives.

Imagine being able to program your computer just by living with it and by carrying it around with you in your bag. I find this incredibly exciting because it means that how we design and build these systems, how we write the software, how we come up with the cool new apps and great new services, is completely different from how we have done it over the past 10 years.

Give us an idea of some cool, speculative research that you are following.
I love looking at the parallels between personal computers and synthetic biology [the use of DNA, enzymes and other biological elements to engineer new systems]. Look at how the personal computer grew a little bit out of the counterculture movement, out of the hippie movement, out of the work Intel did and Steve Jobs and Woz [Apple co-founder Steve Wozniak] did. You see the sort of hacker clubs and small groups of enthusiasts they formed, and you realize what’s going on in synthetic biology right now—it’s really, really similar. A lot of it is being done by people under the age of 20; a lot of it is being done by enthusiasts who are just getting together and talking about it. Then you can say, if this is true, let’s try to judge whether synthetic biology and personal computers are developing at a similar pace. This can help when projecting the future of various technologies.

Have you been doing any actual research on synthetic biology?
I’ve been doing a lot of work with a synthetic biologist named Andrew Hessel, who collaborates with the Pink Army Cooperative, the folks [promoting individualized therapy for breast cancer]. He’s studying the design of viruses, as well as DNA. Think of DNA as the software and an organism—bacterium or virus—as the hardware. You stick the software in, and it actually becomes a computational device.

Consider this: you take a GPS app and put it into your cell phone, and your cell phone becomes a GPS. But what’s really awesome about synthetic biology is that you go to sleep with one organism, and when you wake up in the morning there are two, then there are four. They become self-replicating computational devices.

Any ideas?
One fun example that Andrew and I were kicking around was a way to solve “the last mile” for network connectivity. This is literally the last mile between the network hub and your house or apartment. Imagine now that we engineered an organism so that it was an excellent conductor for that Internet signal, better than the cable and copper wires we’re using today. Now all we have to do is lay down our little organism between your house and that network hub, and you’ll be downloading HD movies day and night.

But how do we do that? Well, what if we crossed our superconducting organism with grass seed so that it looked and grew and could be maintained like grass? Imagine that everywhere you see grass, there could be a superconducting mesh network that brings the Internet anywhere it grows. And it’s alive! Anyone who has ever taken care of a lawn knows that if you treat it right it just keeps growing, sometimes even popping up in places you don’t want it. Lawn maintenance and network maintenance become the same thing. That grass median that runs down the middle of many highways across the world could literally become the information superhighway.

How can science fiction influence real-world research and development?
There’s a great symbiotic history between science fiction and science fact—fiction informs fact. I do a lot of lectures on AI [artificial intelligence] and robotics, and I talk about inspiration and how we can use science fiction to play around with these ideas. Every time, people come up to me, pull me aside and say, “You do know the reason why I got into robotics was C-3PO, right?” I’ve become a confessor to some people. I just take their hand and say, “You are not alone. It’s okay.”

Science fiction inspires people to see what they could do. It captures their imagination, which is incredibly important for developing better technology.

How did you become Intel’s futurist?
I had been using future casting—a combination of computer science and social science—as part of my work on Intel projects. Before becoming Intel’s futurist, I was a consumer experience architect at Intel. This is like a software or silicon architect, except that I designed the entire experience that people would have. A consumer experience architect is part engineer, part designer looking five or 10 years out, toward, for example, the design for system-on-a-chip (SoC) processors, the new type of chip we’re putting together with a smaller form factor [meaning smaller dimensions]. Future casting helped us ask ourselves hard questions about the future of technology and figure out what to build. So [Intel’s chief technology officer] Justin Rattner said to me, “We think you should be Intel’s futurist.” And I said, “No way.” That’s a huge responsibility, especially for a place like Intel.

At the time, Justin wanted me to get out there and start talking to people about the future. We had such discussions internally, but we hadn’t been talking about it with others outside the company. The next week [June 30, 2010], we released the book Screen Future: The Future of Entertainment, Computing and the Devices We Love, which was about technology in 2015. I sat down and talked to the press. Almost everyone said, “So you’re Intel’s futurist.” At that point, I realized that I already had the job.

How does your role as future caster for Intel fit in with what the company is doing as a maker of microprocessors?
I sit in front of the company’s development road map. So I work with a lot of the chip designers in Israel and elsewhere. And every year they remind me that I need to be thinking about, for example, 2020. I create models of what the experience will be like, what it will feel like to use a computer in 2020. Intel is an engineering company, so I turn that into requirements and capabilities for our chips. I’m working on 2019 right now.

How do you ensure that the ideas you have for Intel’s future are compatible with the direction that hardware makers (Apple, Dell, and so on), who use Intel chips in their PCs, want to take their products?
The first step in my process is social science. We have ethnographers and anthropologists studying people first and foremost. So all of the future-casting work I do starts with a rich understanding of humans, who are going to use the technology, after all. Then we get into the computer science. Then I do the statistical modeling. Then I start developing models about what the future is going to look like. Then I hit the road.

A huge part of our work is getting out and talking not just to our customers but to the broader ecosystem of government, the military and universities. I ask them, “Where do you see things going? And what will it be like for a person to experience this future?”

Can you give one example of how looking ahead may have helped—or is helping—design an Intel hardware product?
We don’t just ask ourselves how we can make the chip smaller and faster and less expensive. Now we ask: What do people need to do with the device? What do we want that final experience to be? What will capture people’s imaginations? In Screen Future, I wrote about a future where multiple computational devices are connected, all working together so that the consumer won’t see any difference between his or her PC or TV or smartphone. For people it will just be about screens: different screens that give us access to the entertainment and the people we love.

Does the act of engaging in imaginative writing help you with your day job?
Writing science fiction has been an integral part of my future-casting process for years. I’ve used it at Intel to explore the human, cultural and ethical implications of the technologies we’re building. Often these stories or science-fiction prototypes are used as a part of the final product specification—this is the requirements document that explains to the engineers and development team what needs to be built. Some of the greatest scientific minds, such as Albert Einstein and Richard Feynman, used creativity and their imagination as a part of their scientific method. When I write science fiction based on science fact, it gives me a really powerful tool to innovate and create technologies that are better designed for humans. Also, engineers love science fiction, so it’s a great way to get across my 10- to 15-year future casts.

What is the greatest misconception that people have about the future?
So many people think the future is something that is set. They say, “You’re a futurist. Make a prediction.” The future is much more complicated than that. The future is completely in motion—it isn’t this fixed point out there that we’re all sort of running for and can’t do anything about. The future is made every day by the actions of people. Because of that, people need to be active participants in that future. The biggest way you can affect the future is to talk about it with your family, your friends, your government.

This article was published in print as "Professional Seer."