Scientific American: Do you plan to continue your commitment to research?
Bill Gates: Yes, our research has had a phenomenal payoff for us and for our users. We are dependent on our research, whether it's for ensuring ultrareliability [or] deep security or for making it simple to deal with all the information that we've got. It's the advances out of our research lab that make us optimistic that we'll be solving these tough problems.
SA: Some critics have said that there is an unbelievable collection of talent here but that there have not been achievements on the order of things like the transistor. Do you see any validity in that?
BG: Well, we do software. And if you look at the papers at Siggraph [a computer graphics conference] and the proportion coming out of our one lab, you see us in many different areas. We wish there were other labs doing more. We are a very high percentage of the nonuniversity work being done in many of these fields. Typically in the computer field, most of the companies don't have long-term research. They just don't.
Take what we've done in machine translation--no, that's not as good as the transistor, but it's pretty phenomenal. The stuff we're doing with speech, pretty phenomenal. Electronic ink. Software reliability. If we weren't able to prove [test and validate] programs, we wouldn't be able to get the Internet to achieve its potential. An investment of the size we're making will only be judged 20 years from now.
SA: Do you see continued relevance in the concept of artificial intelligence [AI]? The term is not used very much anymore. Some people say that's because it's ubiquitous, that it's incorporated into lots of products. But there are plenty of neuroscientists who say that computers are still clueless.
BG: And so are neuroscientists [laughter]. No, seriously, we don't understand the plasticity of the neurons. How does that work? We don't understand why a neuron behaves differently a day later than before. What is it that the accumulation of signals on it causes?
There is a part of AI that we're still in the early stages of, which is true learning. Now, there are all these peripheral problems--vision, speech, things like that--that we're making huge progress in. If you just take Microsoft Research alone in those areas, those used to be defined as part of AI. Playing games used to be defined as part of AI. For particular games it's going pretty well, but we did all this work without a general theory of learning. I am an AI optimist.
We've got a lot of work in machine learning, which is sort of the polite term for AI nowadays, because AI got so broad that it's not that well defined.
SA: Are enough people going into the computer sciences?
BG: That was the big theme of my recent tour to colleges throughout the U.S. It's a paradox that this is the most exciting time in computer science and these are the most interesting jobs. You can see the work being done to really improve the creativity and effectiveness of hundreds of millions of people. These jobs should be way more interesting than even going to Wall Street or being a lawyer--or, I can argue, than anything but perhaps biology, and there it's just a tie.
And yet the number of people going in has gone down, and it's hard to measure whether we are getting the best and brightest. There is this huge disparity. We're getting the best and brightest in China and India, and the numbers are just going up there. Does that mean that this country will have to let those people come here, or does it mean the good work in the future won't be done here? So we really need a rededication to what's made the U.S. such a leader.
SA: Why are people less attracted to these jobs here?
BG: Oh, it's partly that the bubble burst. It's partly articulating the benefits of the field and the variety of jobs. People have to know that these are social jobs, not just sitting in cubicles programming at night. Our field is still not doing a good job drawing in minorities or women, so we're giving up over half the potential entrants.