Podcast Transcription
Steve: Welcome to the Scientific American podcast, Science Talk, posted on February 15th, 2012. I'm Steve Mirsky. On this episode:
Hillis: Well Bill and I programmed computers; we now do it only recreationally. I don't…
Joy: (laughs) We mostly respond to e-mail.
Hillis: Right. But in those days, you could hold the whole thing in your head. You know, Bill's probably one of the last people that held the whole operating system in his head.
Joy: Yes, I had the whole thing under my control.
Steve: That's Danny Hillis, co-founder of the Long Now Foundation and author of The Pattern on the Stone: The Simple Ideas That Make Computers Work, and Bill is Bill Joy, co-founder of Sun Microsystems and author of the well-known Wired magazine essay, Why the Future Doesn't Need Us. They were both at the Compass Summit in October in Palos Verdes, California, where they and other technology, business and creative leaders met to talk about global challenges and economic opportunities. Scientific American was the conference media partner, and our executive editor, Fred Guterl, joined them for a conversation about the future of the technological world that we're building today. You'll also hear the voice of media entrepreneur Juliette Powell. Here's Fred.
Guterl: Danny—tell us what "the Entanglement" is.
Hillis: So, I guess, I believe that humanity is entering a fundamentally different age of how we think about the world, in the same sense that the Enlightenment changed our image of our relationship to the world. I think we're switching into something new. And that something new has to do with our relationship with the artifacts that we've created, the technology. So, it used to be that there was the natural world, which was very complicated, too complicated for us to really understand, and it was, sort of, mysterious. And then in the Enlightenment, we started believing it behaved according to a set of rules; we could embody those sets of rules in machines we built, and in some sense, we could create things that were embodiments of our rationality.
Guterl: Right.
Hillis: And so the revolutions that happened, and computing was, sort of, the ultimate example of that, where we could build things that we completely understood and they were digital and they did predictable things and they were understandable.
Guterl: But you think this is an old world view now—we're switching to something else.
Hillis: Yeah. I think that, in some sense, the Enlightenment idea peaked with the invention of the computer.
Guterl: And so…
Hillis: And that's the ultimate expression of that.
Guterl: And so the computer embodied us.
Hillis: We embodied rationality in the machinery, or some version of it. But what's happened, and I don't think most people realize this has happened yet, is that our technology has actually now gotten so complicated that we no longer understand it in that same way. So the way that we understand our technology, the most complicated pieces of technology like the Internet for example, is almost the way we understand nature: we understand pieces of them, we understand some basic principles according to which they operate, but we don't really understand in detail their emergent behaviors. And so it's perfectly possible for the Internet to do something that nobody in the world can figure out why it did it or how it did it; it happens all the time, actually. Often we don't bother to figure it out, and many things we might not be able to.
Guterl: I mean, I've had the experience of, you know, you call the IT department because something's not working, and they just scratch their heads and they say, "You know, I just don't know why it's not working."
Hillis: Yeah, there was a great example where the Australian election system couldn't tally the votes, and the reason was that some site at Sun Microsystems had gone down. When they wrote the Java program, they had actually copied a piece of example code that referred to some Sun server, and nobody ever noticed that this system was…
Joy: It was dynamically loading it from a machine which was down?
Hillis: Yeah. And so, you know, somebody turned off a machine in Silicon Valley and they couldn't count the votes in Australia.
Joy: That sounds pretty entangled.
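To make the kind of hidden dependency Danny is describing concrete, here is a minimal, hypothetical Python sketch; the URL, file name and function names are invented for illustration and are not taken from the actual Australian system:

```python
import urllib.request

# Example code copied from a tutorial: it "works" only because it quietly
# fetches a schema file from a vendor's demo server every time it runs.
SCHEMA_URL = "http://demo.example-vendor.com/schemas/ballot.xsd"  # hypothetical

def load_ballot_schema(timeout=5):
    # If the vendor ever turns that machine off, this raises URLError and
    # everything downstream of it (the whole tally, say) stops working.
    with urllib.request.urlopen(SCHEMA_URL, timeout=timeout) as resp:
        return resp.read()

def tally_votes(ballots):
    schema = load_ballot_schema()   # hidden dependency on a distant machine
    # ... validate each ballot against `schema`, then count ...
    return len(ballots)
```

Nothing in the local code looks wrong; the dependency only shows up when the distant machine disappears.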
Hillis: Yeah. So it's an entangled mess, and it's now often beyond our ability to figure those things out. But for instance, you know, when Y2K came along…
Guterl: Right.
Hillis: People genuinely didn't know what it would be. People genuinely don't know what would happen if you turned off GPS or, you know, the Internet went down or something.
Guterl: Now you're one of the ones who said that nothing would happen with Y2K.
Hillis: I did in that case, and I guess the reason I think that's worth noting is that I'm not a Cassandra. But back then, I think you could still figure it out. Since then, though, it's evolved to the point where you really can't figure it out.
Joy: Well, you know, RIM may have had their own Y2K thing here; I'm not sure they really knew why their system went down. I discovered this many years ago. I had a lot of computers at Berkeley and I was networking them together really for the first time, and one day they all crashed and they wouldn't come back up, because there were these dependencies between them. And what's happened is these systems have gotten so complicated; it used to be that when something crashed, you could just take a snapshot and look at it and try to figure out what was going on. But people can't root-cause diagnose why they don't work anymore, because they're too complicated; and part of this is the fact that they're written in C, which is a programming language that really isn't suitable for running large programs.
Guterl: Right, right.
Joy: And so the system doesn't have a semantics; in a very deep sense the program does not have a meaning, a formal meaning; what it has is just whatever happens. That's kind of the way nature works too, right? I mean, we have these names for plants and animals, and they're just our own little fictions about what's going on. The truth is these things don't really exist; things are just happening. It's nice for us to be able to categorize things and try to make sense of it, but there's a deeper level at which species—that's just a convenient thing; all these names we put on things are not the real description of what's going on.
Hillis: So, even the idea of cause and effect, sort of, assumes that there is a kind of contract, a sort of single cause or something like that. But if what things do is just what they do…
Guterl: Right. We were talking about cancer tumors just earlier, and I think it was you who was saying that the tumor grows very quickly, almost springs up all at once; and it's not that it acts like an infectious thing, it's almost a failure of the rest of the body to prevent the tumor from growing. Is this the kind of…
Hillis: So that's, yeah, and we're used to that in biology, that you don't quite understand it. Nobody would expect that you could absolutely predict how a human body is going to react to a drug or something like that. But we're not used to that in our technology. So that's what's interesting: now it's not just nature that's like that, but in fact the things that we've built are like that. So, when Bill and I program computers, we now do it only recreationally. I don't…
Joy: We mostly respond to e-mails. (laughs)
Hillis: But in those days…
Guterl: It's a hell you've created, so.
Hillis: You could hold the whole thing in your head. You know, Bill's probably one of the last people that held the whole operating system in his head.
Joy: I had the whole thing under my control.
Hillis: And now there is nobody that holds the operating system in their head.
Guterl: So what does this mean?
Hillis: And some people program by taking other things they don't quite understand and gluing them together, sort of guessing how they'll behave and fixing it when they don't.
Guterl: So where is this going, I mean, what's this all mean?
Joy: Well, if we take it to another level, maybe another name for "the Entanglement" is the Anthropocene, right? I mean, our relationship to the environment now is that we've become part of it, right, with global warming or whatever. Or in biology, it's not "man apart"; we're in it; you know, we modified the ecosystem in North America when we came here. So this has become almost inevitable now: these systems are too complex to understand, yet we're screwing with them. So you cross a line where you have things you don't completely understand but that you also have to behave responsibly toward and manage. The precautionary principle and all that are part of this, but it's worse than that, because how can we know what the right thing to do is? You know that you're responsible, but you can't predict what the consequence of any action you take is going to be; so you're left with what? The Hippocratic oath? But that's not going to work either, because you can't do nothing, you know? It's like the laws of robotics: it's not okay to do nothing and thereby cause harm, but whatever you do, you don't know what it's going to do. So it's a big responsibility that you've taken on. We've created these systems that we depend on, but we can't fix them; especially if it's Windows in there, we can't fix it; it's 50 million lines of really crappy—it's a triumph of implementation over design. There is no design, it's just a bunch of typing, and it's a big mess; and that's why all these companies got attacked: the people who wrote this, it was just sloppy, right? And these systems are, in some sense, sloppy, mucky, and yet we depend on them. So now what are we going to do?
Hillis: So, I think there are two things we're going to do. First of all, we have to acknowledge this is the case. Right now, people assume that there's some expert someplace that actually understands it. And, you know, since we used to be the experts that actually understood it, I can tell you they don't exist anymore, okay? Maybe in 2000 we still did; now we actually don't anymore. I really don't think there are any experts; there are people that understand bigger parts and smaller parts and so on. So people have to, sort of, get over the idea that there's an expert someplace that understands it. And then once you do that, I think one of the things we need to do is decide that, if we're really depending on something, we might need to make an alternative, lower-performance version of it that people actually do understand and that we can depend on.
Joy: But I think the other assumption we've made is that we can continue to do things the way we did them in the Iron Age and so on. Maybe we're building our society on things which are not abundant enough to be sustainable.
Guterl: You mean, like, fossil fuels, and water and fish and…
Joy: Everything, yeah. And so what we've done—biology organized itself around things that were crustally abundant, but our society is not organized that way. You know, one of my favorite charts is the large-scale distribution of elements versus their atomic number, and you look at some of the things that we depend on—they're not very abundant. Now, there's another overlay on that, which is how much of it is concentrated by geology. Because something can be rare crustally but relatively easy to mine to some level, because a geologic process, through some chemical means, has concentrated it; but if you put a lot of pressure on that, you're going to run out anyway, right? So we have entangled ourselves with things that are in relatively short supply.
Guterl: Are you talking about rare earths like lithium?
Joy: That's just the tip of the iceberg, right? Oil, rare earths, fish; you can go on and on. But the point is that if we're going to be sustainable—we know how to be more energy efficient and all that—but if we're going to give things to six, seven, however many billion people, they've got to be made out of things which are more like green chemistry: nontoxic, and made out of crustally abundant materials. It's more likely to be based, in the future, on a silica-based thing or a carbon-based thing than a steel- or titanium-based thing, because of, you know, the amount of energy embodied in the steel, or the relative expense, which reflects the scarcity of those other things. And that's kind of the stuff that we've been working on: finding things with good properties that don't have a huge amount of embodied energy or depend on really rare materials, because we just can't—you see the volatility in the commodities. We can't solve this by simply trying to mine more.
Hillis: But I think, tying it back to the whole computer thing, the idea is having conceptual systems that really are well thought out, so they're simple and easy to understand, so you don't have all these configurational problems that happen when you start combining them all together; because then all of a sudden it becomes more like a weather pattern: very unpredictable, maybe only predictable if you use chaos theory or something like that, versus something that's far simpler. But it takes a lot more time to come up with that simple solution to the problem.
Guterl: Well it's an interesting idea; so then basically the Internet has become this wild thing that we don't have any control over, that we really don't…
Hillis: We have some control over it.
Guterl: Some control, but we don't understand it really, we don't know quite what it's going to do, and then we need to build smaller systems within that that we can predict?
Hillis: Yeah; now that we have a better feeling for how it works, maybe we can come up with a far simpler way of doing it; just the same way people are now coming up with $200 computers, you know. They understand basically what needs to go into it, and now let's find a cheap way to put it all together.
Joy: There's a paradox in the Internet, in that when DARPA designed it, it was supposed to be highly redundant. But the economic forces actually drive it to be less redundant, because co-location has economic advantages, so you tend to get very highly connected, large nodes that drive most of the traffic. And so in my view it's far less robust than we think, because if you took a few of these out, the thing would just overload; 80 percent of the capacity can be taken out by a relatively small number of cascading failures.
Guterl: And that would have economic consequences.
Joy: Yeah, you don't have to attack the whole thing.
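Bill's claim about hubs and cascading failures can be illustrated with a toy simulation. The sketch below assumes the networkx library and uses a scale-free random graph as a rough stand-in for hub-heavy Internet topology; the parameters and the simple load-redistribution rule are arbitrary, so the output is illustrative rather than a measurement of the real network:

```python
import networkx as nx

# A scale-free graph as a rough stand-in for the Internet's hub-heavy topology.
G = nx.barabasi_albert_graph(n=5000, m=2, seed=1)

# Toy cascade model: every node carries a load equal to its degree and has
# only 20% headroom above that.
load = {v: d for v, d in G.degree()}
capacity = {v: 1.2 * l for v, l in load.items()}

# Knock out the 10 busiest nodes and push their load onto their neighbours.
failed = set(sorted(load, key=load.get, reverse=True)[:10])
frontier = set(failed)
while frontier:
    overloaded = set()
    for v in frontier:
        neighbours = [u for u in G.neighbors(v) if u not in failed]
        if not neighbours:
            continue
        share = load[v] / len(neighbours)
        for u in neighbours:
            load[u] += share
            if load[u] > capacity[u]:
                overloaded.add(u)      # this node fails in the next round
    failed |= overloaded
    frontier = overloaded

print(f"{len(failed)} of {G.number_of_nodes()} nodes failed "
      "after removing just 10 hubs")
```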
Hillis: I think there's another flaw in it, which is that the robustness it was designed for comes from the fact that, at the time it got designed, nobody trusted the phone companies to keep the lines up.
Guterl: Right.
Hillis: It's very robust against lines going down—although as Bill points out, maybe we've even optimized that robustness out of it—but for instance, it did assume that you could trust the (UNCLEAR 15.11) because they all knew each other, and we all trusted each other. So the assumption was that there weren't bad, malicious people on the Internet. So you would subchannel everybody's traffic, you know, since nobody would be unfair about how much traffic they sent to you and so on. Well, that trust assumption is inverted now. Communication lines are very robust and reliable, but we can't…
Guterl: Upwards of 80 percent of the messages...
Hillis: There are bad guys on the network, and those bad guys are happy to subvert it in any way they can; and if you had known that when you designed the network, you would've designed it with very, very different protocols.
Joy: It doesn't have a metabolism, because there's no friction in the system, there is no feedback; so as long as the cost of injecting packets is essentially zero, this will occur. You know, in the postal system, there's a limit to how much junk you can send because it costs you money; in nature, there's generally a balance: if you overpopulate, you starve or something; there's something that pushes back, if not a predator then something else in the environment. But on the Internet, there's no effective limit; the system wasn't designed with that kind of balanced thinking in it.
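The "friction" Bill describes can be as simple as a token bucket: each packet spends a token, and tokens only come back slowly, so flooding exhausts the sender rather than the network. A minimal sketch, with the class name and rates invented for illustration:

```python
import time

class TokenBucket:
    """Crude 'metabolism' for a sender: packets cost tokens, tokens refill slowly."""

    def __init__(self, capacity=10, refill_per_second=1.0):
        self.capacity = capacity
        self.tokens = capacity
        self.refill_per_second = refill_per_second
        self.last_refill = time.monotonic()

    def try_send(self):
        now = time.monotonic()
        # Top the bucket back up in proportion to elapsed time, never past capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.refill_per_second)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True      # packet may be injected
        return False         # sender has to wait: flooding is self-limiting

bucket = TokenBucket()
sent = sum(bucket.try_send() for _ in range(1000))  # a burst of 1000 attempts
print(f"{sent} of 1000 packets admitted in one burst")
```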
Hillis: You can now go back, see what the actual problem is, and say, "Okay, if we're actually going to depend on this network, then maybe it has to have some different properties." Maybe the properties it has are fine for viewing YouTube videos…
Guterl: Right.
Hillis: But that's probably not fine for calling up the fire station when your house is on fire…
Guterl: Right.
Hillis: Or, you know, doing a bank transfer or something like that.
Guterl: Right. Is this something that people are working on or is this just an idea that you had?
Hillis: Bill and I have been working on it a little bit.
Joy: Yeah, we've been working on it a little bit. But you know, the problem is, the economics doesn't pay for security and reliability, at least for the average case; the system right now doesn't value it—and even with a traumatic event that took the whole thing down, people would forget very quickly and probably revert to the same behavior.
Hillis: People forget that it's actually happened; the Internet actually has gone down a few times.
Guterl: Right
Hillis: Or much of it has gone down.
Joy: Like when the worm came up.
Hillis: And those weren't even malicious, those were accidents.
Guterl: So this is kind of scary, really, when you look at something like, say, Stuxnet, which was able to—and that was a very targeted thing, but in the wrong hands that would be a pretty scary bit of malware. You could imagine another version of malware that takes everything down and really wreaks a lot of havoc.
Hillis: Yep, we can easily imagine that. Now, people have responded to this by saying, "Well, cyber security is important, let's get it under control." So there's lots of effort; I mean, we have a cyber security czar. But all of that is oriented towards, "Well, let's enforce the rules on top of this very shaky foundation we have built." Nobody is saying, "Well, maybe we should build this simpler thing over here." We used to have the ham radios for communicating when the phone system went out.
Guterl: That's right, for an emergency.
Hillis: Now the hams, to the degree they're left, are running Internet protocols.
Joy: The thing is that most of the people who are doing Internet startups, I'm not sure they know how to program in the way that we programmed. And they certainly don't understand why the system is the way it is. You know, Danny and I are old enough that we could still design a new Internet that was nothing like the Internet. It wouldn't be based on the principles of, you know, datagrams and universal addresses; it would be based on something very different. And it would have different weaknesses. Right, I mean, it's like signal and noise: you've got signal and noise, and the total capacity of the channel is limited, right; and so we would move the signal and noise around, in a way. We would waste bandwidth for simplicity and reliability. Whereas right now, you know, when the AT&T people came to me 25 years ago, they couldn't believe that anyone would ever do phone calls on the Internet; and they said, "The packets might get delayed," and I said to them, "Just send them three times." You know, whatever the probabilities, I'll send them three different ways, who cares? And that just was inconceivable—what a waste of bandwidth. But the honest truth is, it's insignificant. If you sent every phone conversation on the net three times, it would make no difference at all, because it's just drowned out by the video streams. So the capacity exists, but we're not using that capacity in the service of simplicity and reliability; we don't even roughly distinguish between a lot of these kinds of traffic. I mean, there's some attempt to do it, but it's really not fundamental to how the system works.
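"Just send them three times" is path redundancy paid for with bandwidth: fire the same packet down several independent routes and keep whichever copy arrives first. A hypothetical sketch of that idea; the route names and the caller-supplied send_via function are invented for illustration:

```python
import concurrent.futures

def send_redundantly(packet, send_via, routes=("route_a", "route_b", "route_c")):
    """Fire the same packet down several independent routes and return as soon as
    any copy gets through; the duplicates are 'wasted' bandwidth buying reliability."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=len(routes))
    futures = [pool.submit(send_via, route, packet) for route in routes]
    try:
        # First successful delivery wins; failures on other routes don't matter.
        for fut in concurrent.futures.as_completed(futures):
            try:
                return fut.result()
            except Exception:
                continue            # that route failed, wait for another copy
        return None                 # every route failed
    finally:
        pool.shutdown(wait=False)
```

Here send_via stands in for whatever per-route transport exists; the point is only that the first copy to arrive is the answer and the extra copies are discarded.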
Hillis: So, what we're thinking about is, rather than trying to prop up the current, shaky system—which actually works very well for many purposes—building a simplified system off to the side, which has these other properties of robustness and uses the bandwidth differently, as Bill said, for robustness and security generally; things that the Internet doesn't do well.
Joy: We have to find a business case for building the part of it that still works when the Internet is down.
Guterl: I would think that…
Joy: When the Internet as we know it is down, you know.
Guterl: So, basically we need a really bad disaster that people have trouble forgetting.
Joy: No, no, we need a way for our system to have the wind at its back. Because if we can't find an economic force in normal times that drives our system to get larger, and it actually has to grow faster than the Internet, it has to have some other reason for being, or else we'll continue to shrink relative to the incumbent; and so how will we ever have enough scale?
Guterl: Right.
Joy: You know, we have to get enough; it's like bringing up a new cellular network—it's not useful unless it's ubiquitous. So now, we can maybe use the existing Internet for some of this, but we have to cause our own thing to get deployed, right, somehow. So there has to be, I don't know; even the government, I don't know if it can afford to do it, right.
Hillis: Not because it's expensive; the government is actually going to spend way more money shoring up the existing system than it would take to do it, and that's…
Joy: We need it to be a natural force that's present. We can't do it on planning, right. It has to be...
Guterl: So how would that happen if it's not…
Hillis: That's what we're trying to figure out.
Joy: Yeah, we have to have some; but I mean, you can see the changes, you know: look how fast the iPad got adopted. It has to have some other thing it does that's so desirable that it brings it into the world. So we want to have these abundant things that aren't so environmentally impactful, that we can take to scale. How could they possibly make it in the world? Well, they have to be better or cheaper or some combination; they have to have some additional virtues. It's not enough for them to be virtuous, to be, you know, green and wonderful; if you want a transition to something more sustainable, they've got to have some other reason to get pulled into the marketplace. It's not enough simply for people to do it for charity. We don't have enough money.
Guterl: What other reason would that be? What could that be?
Joy: It could be better.
Hillis: Maybe there could be some very intensive application, you know, just like the original ARPANET setup. You might have a laboratory system that has very high demands placed on stability. So set up this village where you have some kind of a test case, and then if you can show the virtues of that system, there are going to be other people that want it. Just like when people started replacing copper with glass, you know, as a way of getting information across.
Joy: So for example, this new system could guarantee you 100-millisecond latencies anywhere on the globe. That's quite difficult, actually, because that's roughly the best you could do. That's very tough.
Hillis: Let me give you another example.
Joy: Well, 200 milliseconds is possible. Let's say we can give a guarantee…
Guterl: What does that mean—a 200 millisecond latency?
Joy: Well, it means in a fifth of a second, if you give me a packet of a bounded size and you have to pay a certain amount—it's not going to be free—I guarantee to get it somewhere within that amount of time 99-point-some-large-number-of-nines percent of the time and…
Guterl: Which is more reliable than…?
Joy: Well, if I gave you that, you could build services on top of it that you can't build on the Internet. Because if that was the property I expected to provide as the solution, I would have to have an economic case for it. But I could implement a property like that, if that was the one we wanted to do.
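Mechanically, the guarantee Bill is sketching is just a latency bound plus a quantile. A minimal sketch of checking such a guarantee against measured delivery times, using the numbers from the conversation (the function name and the sample data are invented for illustration):

```python
def sla_met(latencies_ms, bound_ms=200.0, required_fraction=0.99999):
    """Did deliveries land within `bound_ms` often enough to honour the guarantee?
    `latencies_ms` holds the observed delivery time of every accepted packet."""
    if not latencies_ms:
        return True                     # vacuously met: nothing was sent
    on_time = sum(1 for t in latencies_ms if t <= bound_ms)
    return on_time / len(latencies_ms) >= required_fraction

# 100,000 deliveries in which two packets were late:
observed = [150.0] * 99_998 + [350.0, 400.0]
print(sla_met(observed))                # False: two misses breaks a five-nines bound
```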
Hillis: Well here's another example in a completely different dimension: Imagine if you had a mail system that you never got spam on.
Guterl: Ah! Yeah, that's…
Hillis: Because the reason you get spam, fundamentally, is that we built it on this very shaky foundation, and you can't really tell where something's coming from, and so…
Joy: You don't get that on Facebook, right, because on Facebook it's only your friends, it's all white-list; but then how do you find new people? So there's a tension…
Guterl: Right.
Joy: I actually think the way to solve that is to charge people; I don't mind if people pay a dollar, and they can pay half of it to me, so every time I get a spam, I get 50 cents out of it (laughs). You have to pay to get into my inbox. And I can set that number to whatever I want.
Guterl: Interesting.
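A minimal sketch of the pricing idea Bill describes, with the class, method names and amounts invented for illustration: mail only gets in if the sender posts at least the fee the recipient has set, and the recipient keeps a cut whenever a message turns out to be spam:

```python
class PricedInbox:
    """Roughly the scheme described above: mail gets in only with a posted fee
    that the recipient has set; spam costs the spammer real money."""

    def __init__(self, entry_fee=1.00, recipient_cut=0.50):
        self.entry_fee = entry_fee          # recipient can set this to anything
        self.recipient_cut = recipient_cut  # share kept when a message is spam
        self.messages = []
        self.earnings = 0.0

    def deliver(self, sender, body, payment):
        if payment < self.entry_fee:
            return False                    # not enough to get into the inbox
        self.messages.append((sender, body, payment))
        return True

    def mark_spam(self, index):
        sender, body, payment = self.messages.pop(index)
        self.earnings += payment * self.recipient_cut   # "50 cents" per spam
        # A refund path for legitimate mail is omitted in this sketch.

inbox = PricedInbox(entry_fee=1.00)
inbox.deliver("stranger@example.com", "AMAZING OFFER!!!", payment=1.00)
inbox.mark_spam(0)
print(f"${inbox.earnings:.2f} earned from spam")   # $0.50
```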
Powell: Or what if you created a side community? So everyone on Facebook, for example, that wanted to do micro-transactions, instead of going through their banks or through their geographical restrictions, would be able to deal with their friends, share with their friends and do financial exchanges with their friends by cutting out the middlemen. But you couldn't do it on the Internet; you could do it on this other thing.
Joy: You can create a barter account.
Powell: Right. So in other words, all of these people would gravitate towards it naturally, and because all of your friends are there, you would want to…
Joy: Governments will make it illegal because it's a form of tax evasion.
Powell: Oh yeah, but that's a whole different issue.
Joy: (laughs) I know, it's true though. As long as you're not too successful, you're okay.
Guterl: So are you guys trying to save the world from the thing that you created? The monster you created?
Joy: We didn't create it; we didn't design the packet-switching architecture. That wasn't our… I'm not sure it was the wrong idea at the time.
Hillis: But you implemented IP on the Internet.
Joy: I implemented it so it worked, maybe I'm responsible …
Hillis: You deserve part of the blame. (laughter)
Joy: The thing is the unfolding part. We can write, or get some people to write, the code for the idea in a very small, bounded amount of time, but the real critical thing is, you know, when Steve Jobs came back to Apple, he didn't try to attack the PC directly; he waited. You don't just sit and wait; you have to, you know, pick your spot.
Guterl: So you guys are waiting for your…
Joy: We're not waiting, we're just… we've got to be prepared, right, and the opportunity will argue for itself. It's not like we're worried about it, right.
Guterl: Well, I have to ask you, Bill, what do you think, you wrote this story, this essay 10 or 11 years ago: Are you still worried about the future, about machines and…?
Joy: I'm more worried about biological pathogens, because, you know, if you do epidemiological studies of how connected we are by global travel, there's pretty much no use in setting up a quarantine anymore, it's just…
Guterl: No, there's no point.
Joy: So, I think we need to invest more in public health and early monitoring; you know, Larry Brilliant was working on some of that stuff. But we need to have monitoring and early diagnosis and rapid response; Craig Venter was telling me they can now make a vaccine in seven days or something. There's that movie that just came out…
Guterl: Contagion.
Joy: Yeah, but the point is we have the technology now to sequence and manufacture vaccines fairly quickly; and ideally they wouldn't be grown in eggs or whatever, right; because what if it starts as a virus in chickens or something, and we're screwed.
Guterl: Right, and also it takes so long.
Joy: It takes so long; we want to be able to make it much more quickly. So we need to…
Hillis: One of the things I think is very promising is making contagious vaccines.
Joy: That's a genie, that's close to it (laughter); fertilizer and weapons are adjacent, you know, it's one of those problems. (laughter)
Guterl: But you're not worried about machines, artificial intelligence? We were talking about…
Joy: I was always more worried about the biology. It was just easier to argue that there was a problem with three examples rather than one. But it's clear that, you know, we're biological; but the viruses also exist in the information network, and I didn't talk about those; those are more dangerous to our economic system, not so much to our, I suppose it threatens us, you know, if 911 goes down, but not at the same level that…
Hillis: So I think, it really does threaten us if the Internet goes down at this point.
Guterl: Our existence?
Hillis: But the interesting thing is, we don't know. Well, I think you could have a problem getting food into cities, you know. Whether that's true yet, I don't think we know, but will it be true within 10 years?
Joy: We don't think the Internet is reliable; despite your optimism about Y2K, Danny, you don't think the Internet is…
Hillis: No I don't think it is. I don't want my life to depend on it.
Joy: It's not going to be easy to fix.
Guterl: Well, I've heard that argument about the power grid, that the power grid is very vulnerable to attack.
Hillis: Well, the power grid is even more vulnerable than the Internet, because the power grid is starting to depend on the Internet like everything else is.
Guterl: Right. So it's like a cat chasing its tail. (laughs)
Hillis: Yeah, it is. Right.
Joy: But at least in that situation, you know, we're starting to come to an inflection point where distributed renewable energy will be as cheap as or cheaper than the grid, because solar and batteries and small-scale wind will start crossing over. So I think in the third world it's already the case that distributed systems are the smart choice, because the grid doesn't exist, so why build it? Obviously, for industry it's one thing; when you have so much industrial intensity, you can't generate power on site. I mean, for the entire country of Germany, if you took all the wind and all of the sunlight on it, it wouldn't be enough to power the infrastructure of the country. It'll clearly cross over at some point when you're that energy intensive; or maybe you could reduce your energy intensity. But for a big factory, you can't generate its power on site. For residences, though, maybe that's the thing: I can reduce my footprint enough that I can generate enough power from renewables for most cases at my house—my life doesn't stop if the grid goes down.
Guterl: It seems like we need to retrofit the entire world, if we're going to support these 10 billion people or…
Hillis: I think "retrofit" suggest that you, sort of, take out the old and pull out the new. I think the new actually gets built alongside the old. So it's not like the old will go away, it's just; we still have gas lines even though we ran electric lines.
Joy: But I would say, if you look 30 or 40 years out, if we assume things like really low-cost, low-embodied-energy materials, they should be a lot cheaper than the way we've done things in the past. If you look out to there and say, "Economics is going to win over some long period," then at some point somebody will tip and start investing in the stuff that's cheaper, and they'll have a business advantage for doing so. So we know that some long period of time from now, we'll be on sustainable, low-embodied-energy, crustally abundant advanced materials and advanced distributed, renewable energy; and somewhere between here and there, a lot of capital equipment is going to become obsolete, because its marginal cost of production is so much more than the new way of doing it that the new way will put it out of business. The phone company used to think that the phone system was going to last for 40 or 50 years or 100 years, that the copper wires were going to last. But they didn't, right? They didn't have economic value for that long. So suppose we just have a bunch of devices all over the world, and somehow they're all going to talk to each other and self-organize a network, okay. The problem now is we can't trust them all, because people are going to put out devices that pretend to be other devices. And that's what Danny was saying: we used to trust the physical network except for its reliability; now—forget the reliability—we can't trust the devices, because people are malicious. So we somehow have to have a system with a bunch of devices, some of which we can't trust, which self-organizes into a network that we can trust. So trust is almost the thing that's the hardest, right? You have to have, you know, self-organizing trust. The other thing is, you really have to have something that prevents flooding. In other words, if the system has no friction and anyone anywhere can inject as many packets as they want, then it's very, very vulnerable.
Hillis: That's the fundamental weakness of the current network.
Joy: And people don't pay for anything, yet it gives you a great property, that it's very easy to add to it.
Guterl: So how do you have a distributed wireless network where you provide friction?
Joy: People have to pay something; it may not be money, but they have to get something that they then have to use in order to send stuff into the system.
Hillis: The reason he answered the question that way is, I think, that we see the technical solutions to it very clearly.
Guterl: I see.
Hillis: But what we don't see, I mean, the hard part, is the economic underpinnings, you know: How does it get paid for? How do people adopt it? Those kinds of things. But I think it's very clear how to make a network that's trustworthy even if it has components that are not trustworthy.
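One standard way to get trustworthy answers out of individually untrustworthy components, offered here only as an illustration of the kind of technique being gestured at (not as Hillis and Joy's actual design), is to ask several independent nodes the same question and accept only what a majority agrees on. The node callables and addresses in this sketch are invented:

```python
from collections import Counter

def trusted_lookup(query, nodes, quorum=None):
    """Ask every node, keep the answer only if a majority of responders agree.
    A minority of malicious or broken nodes can't forge the result on their own."""
    if quorum is None:
        quorum = len(nodes) // 2 + 1
    answers = []
    for node in nodes:
        try:
            answers.append(node(query))     # each `node` is a hypothetical callable
        except Exception:
            pass                            # unreachable nodes simply don't vote
    if not answers:
        return None
    answer, votes = Counter(answers).most_common(1)[0]
    return answer if votes >= quorum else None   # no majority: refuse to answer

honest = lambda q: "93.184.216.34"   # two honest nodes agree
liar   = lambda q: "10.66.66.66"     # one malicious node lies
print(trusted_lookup("example.com", [honest, honest, liar]))   # 93.184.216.34
```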
Joy: So for example, you might be able to send a packet only if you've done something for the network which costs you energy and time.
Guterl: Like what would that be?
Joy: So you can send a packet only if you've forwarded a packet; I mean, obviously that has a startup problem, because it's too much in equilibrium, but the point is you can't just go completely crazy without running out of something, right?
Hillis: Well, for instance, if you take the airline network, we don't have to worry about planes getting clogged with people so they can't take off, because you actually have to buy a ticket to reserve a seat before you can get on the airplane.
Joy: You might have to reserve a spot, and if too many people are trying to reserve, then the price might go up. So normally, as long as the bad things aren't happening, it doesn't cost anything, but in extreme instances it's like peak pricing; this is what they do at the center of cities to prevent congestion. But this is a network, so it can be more dynamic; but then we're going to need a central authority to enforce this, which probably has to be self-organized and distributed, so it gets very technical. But the problem is that this is at the center of what you have to do, because this is the thing that causes the whole network to collapse, and it's not even in the concept of the system as it is. So the thing that's fundamental to this new network is the thing that's inconceivable in the old.
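A toy version of the "reserve a seat, and the price rises with congestion" idea Bill describes; the class, capacities and prices are invented for illustration:

```python
class SlotMarket:
    """A fixed number of send slots per interval: free while there's slack,
    priced upward as the interval fills, so senders back off instead of flooding."""

    def __init__(self, slots_per_interval=1000, base_price=0.0, surge_step=0.01):
        self.capacity = slots_per_interval
        self.reserved = 0
        self.base_price = base_price
        self.surge_step = surge_step

    def quote(self):
        # Free until the interval is 80% full, then the price climbs with load.
        congestion = max(0, self.reserved - int(0.8 * self.capacity))
        return self.base_price + congestion * self.surge_step

    def reserve(self, willing_to_pay):
        if self.reserved >= self.capacity:
            return None                 # interval is full; wait for the next one
        price = self.quote()
        if willing_to_pay < price:
            return None                 # sender backs off rather than paying more
        self.reserved += 1
        return price

market = SlotMarket()
admitted = [market.reserve(willing_to_pay=0.05) for _ in range(1200)]
print(sum(p is not None for p in admitted), "of 1200 requests admitted")
```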
Steve: Check out the related story just out in the March issue of Scientific American, called "The Shadow Web," about attempts to build that alternate Internet that can't be filtered or shut down. It's also available at http://www.ScientificAmerican.com. And follow us on Twitter, where you'll get a tweet every time a new item hits our Web site. Our Twitter name is @sciam, S-C-I-A-M. For Scientific American's Science Talk, I'm Steve Mirsky. Thanks for clicking on us.
Web sites related to this episode include http://compass-summit.com and The Shadow Web