In the December issue of Scientific American, author David Weinberger reports from the frontiers of knowledge. His story "The Machine That Would Predict the Future" explores the promise of the FuturICT project, an attempt to build a computer model of all the social, economic, ecological and scientific factors at play in the world. Weinberger is one of our most incisive thinkers about the digital age, a senior researcher at Harvard University's Berkman Center for Internet and Society, the author of books such as Small Pieces Loosely Joined (Basic Books, 2002), Everything Is Miscellaneous: The Power of the New Digital Disorder (Times Books, 2007), and the upcoming Too Big to Know (Basic Books). Technology editor Michael Moyer caught up with him at the Forum d'Avignon (by phone, sadly) to talk about his upcoming book, his December article and the future of knowledge.
What's the book about?
Too Big to Know is about what happens to knowledge when it becomes a network. The basic idea is that the properties of knowledge that we've taken for granted at least in the West for, oh, 2,500 years are not actually properties of knowledge. They're properties of knowledge when its medium is paper. And when you remove the paper and put things online, it takes on the properties of its new medium—of the Internet. Importantly, knowledge in a network includes differences and disagreements in a way that traditional knowledge is uncomfortable with. Everything is unsettled, everything is argued about, and very few things are ever totally resolved on the Net.
We live in a world where in the past few years the nature of facts has been called into question. You have political entities challenging each other on what is real and what is not. This to me is appalling, since there are facts that are true and statements that are lies. But you're saying this is going to become a more accepted view of what knowledge is?
It's certainly the case that the world is one way and not other ways. There are facts and mistakes, and lies as well. And it would, I think, be a terrible thing if we gave up on facts and said, "Everything is equally true and you believe what you want." That would be a disaster in every regard. Nevertheless, one of the things the Net has taught us is that, like it or not, we are generally not all going to agree.
When Sen. Daniel Patrick Moynihan famously said, "Everyone is entitled to his own opinion, but not his own facts," we heard that as offering us some comfort—that if we just get all the facts out, then we will all come to agreement, that there is a basis that will allow us to agree. I'm saying that there are facts, but we're never going to agree about them. That doesn't mean that there are no facts and that we should all give up—quite the contrary. But it does mean that facts are not going to serve as the basis for agreement that we were hoping for.
You write about FuturICT with a skeptical eye. Why are you dubious about the possibility of a centralized model (or group of models) to create accurate predictions about the future?
In part it's the over-optimism about our ability to gather all the relevant facts. There are an infinite number of facts in the universe. We're very limited in our abilities, and it's hard to know which facts are going to be relevant. History keeps teaching us that we can't recognize the important events that are going to trigger changes.
Second, it assumes a type of linearity that science has been moving away from—I think productively and accurately. To hope that you can specify all the elements of a model sufficiently well that you can predict events that might emerge based upon very small differences in starting conditions—or in the interaction of multiple systems that you may have ignored—is to dream a happy dream. It seems increasingly that we're learning that that won't work. The world is deterministic, but it's chaotic and emergent. And that's a difficult thing for the prior generation of rule-based models to account for.
Do you think that it's impossible to accurately model the world, then?
I am most hopeful about the clash of multiple models, with multiple sensibilities, multiple starting points and multiple presuppositions. This is very different than thinking you can encapsulate all knowledge into a model that you can then turn the crank on. It seems to me that the world is so chaotic—and that we are so limited by our own perspectives—that the most likely way to advance is through the clash of different perspectives, different data sets, different prejudices, different blind spots. That's why I like so much John Wilbanks's expression. He heads Science Commons at Creative Commons. He says that "we really need to have your nerds argue with my nerds." That's how we flush enough error out of the system so that we can have some confidence that, within some limited range of social behavior, we're going to get some degree of accuracy.
But if the goal is to know what's going to happen tomorrow, how are you supposed to pick which of the competing models to use?
The alternative is to let the machine pick which one to use. There aren't perfect alternatives here. I'm assuming that every model and every human brain is so prone to error that that is the primary human condition we need to take account of. Either we trust the machine to get it right—which seems to me unrealistic given the complexity of the world and the limitations of the human mind—or we say, "Well, knowledge has this political component in which people contend (we hope) based on facts and on arguments and on models to try to see what works, but in the end we make a decision to the best of our ability on the complex factors that come into play when humans make decisions."
It seems to me that this is not a cause for despair. Rather, this is what the scientific project is. Knowledge in the Internet Age—networked knowledge—is becoming more like what knowledge has been for scientists over the past few hundred years: it's provisional, it's a hypothesis that is waiting to be disproved. Knowledge is now accepted as the best we humans can do at the moment, but with the hope that we will turn out to be wrong—and thus to advance our knowledge. What's happening to networked knowledge seems to make it much closer to the scientific idea of what knowledge is.