Award-winning author Douglas Fox talks about his cover story in the July issue of Scientific American on "The Limits of Intelligence," limits placed there by the laws of physics.
Podcast Transcription
Steve: Welcome to the Scientific American podcast, Science Talk, posted on June 17th 2011. I am Steve Mirsky. This week on the podcast:
Fox: If you multiply fatter axons by one million, and you've got one million fatter axons, you're literally pushing parts of the brain further apart.
Steve: That's Douglas Fox. He's an award-winning science journalist based in San Francisco, and he wrote the cover story for the July issue of Scientific American on why there may be limits on how intelligent human beings can become, limits that are set by the laws of physics. I spoke to him by phone on June 15th. When I listened to the recording of our talk, I realized that someone near Doug had apparently been doing a little home repair work, so as you listen to us talk about the brain, contemplate man the tool user, in this case with a hammer and what sounds like it might be a radial arm saw.
So I always thought we had these compartmentalized brains because it was a sign of our incredible advancement, that our human brains are so complex that we needed to have an individual section for this and another section for that; and then I read your article and I find out that we had to have these parts of our brain differentiated because we were just coping with problems of signal-to-noise ratio, and our brains, they're really good, but they're kind of just making do as best they can.
Fox: It is. I mean, I think, sort of, the brain is working with individual parts, and those parts are neurons, and those parts are, you know, proteins called ion channels. And they really are quite good, in terms of having evolved by random, you know, mutation of DNA sequences; but they do have a lot of drawbacks, individual drawbacks. The way that brains have evolved, the way that wiring diagrams have evolved, and the way that intelligence has probably evolved have been determined a lot by the peculiarities of these funny little components that we have to work with called neurons.
Steve: And it turns out that if you want to increase, for example, processing speed, if you want to increase the speed at which signals travel through the neurons, because you think that might make us more intelligent or more efficient, it really doesn't work out so simply; you have to pay for that somehow, and all of these tradeoffs wind up giving you pretty much what we have now.
Fox: Yeah, and I want to back up a bit. We can talk about how you define intelligence later on, but the claim is that there may well be a limit to the level of intelligence that we, meaning, you know, life that has neurons, can achieve; it doesn't necessarily mean that humans are at that level, that highest level possible. I think it's really easy to, sort of, look at ourselves and say, "Gee, we're really smart." It's also easy to look at ourselves and say, "Gee, we're really dumb too." But, you know, it doesn't necessarily follow that humans are at that level; there could be a level, you know, some theoretical maximum, that's not too far off. Think about something like faster communication among neurons in the brain. Basically, the brain computes by having neurons talk to each other. A thousand or so neurons send one neuron information, and that neuron, based on the input from its thousand or 10,000 or so synapses coming into it, determines whether to fire or not. Maybe it fires, maybe it doesn't; it sends signals on to another neuron, as do millions of other neurons, and this goes on and on, and you have thinking and intelligence. And, you know, the question is, can you fit more of this into a certain amount of time? One way you could do that, for example, is by having fatter axons. Whether the axon is myelinated or unmyelinated, either way, if you have a fatter axon, a thicker wire, the signals are going to travel more quickly. But the problem is that if you make the axon bigger, it takes up more room, and this sounds really trivial, but a couple of things happen. One is that the thicker axon, if it is unmyelinated in particular, takes a lot more energy, because you're having to flip a lot more ion channels open; you're having to put a lot more ions into the neuron. And then when the pulse, the action potential, is done passing through, you have to pump those ions back out of the neuron again. Doing that costs energy; it costs ATP molecules. So the more ions you pump out of that bigger, fatter axon, the more ATP, a.k.a. energy, a.k.a. Snickers bars and bison brains, that we consume. That's one thing. The other thing is that fatter axons take up more room, so if you multiply a fatter axon by one million and you've got one million fatter axons, you're literally pushing parts of the brain further apart. And therefore you're actually increasing the distance the signals have to travel, so potentially you're neutralizing some of that benefit in speed from the fatter axons, because parts of the brain are further apart. So you make compromises; you say, okay, maybe some of these signals are not as urgent, so even though the brain's gotten bigger, we can let the axon stay the same width, have the signal travel at the same speed and just take longer to get there. So then you're only making some of the axons fatter. But the point is that there are these tradeoffs, and ultimately you're taking up more space and using more energy to accomplish these greater capabilities.
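[A rough back-of-the-envelope sketch of the tradeoff Fox describes, assuming textbook scaling rules rather than anything stated in the article: in an unmyelinated axon, conduction speed grows roughly with the square root of the diameter, while the space the axon occupies grows with the diameter squared, and the membrane that has to be charged and pumped, a rough proxy for energy per spike, grows with the diameter. The numbers below are purely illustrative.]

```python
import math

def unmyelinated_axon_tradeoff(diameter: float) -> dict:
    """Illustrative scaling for an unmyelinated axon of a given diameter.

    Assumptions (textbook-style, not from the article):
      * conduction velocity ~ sqrt(diameter)
      * cross-sectional area (space cost) ~ diameter**2
      * membrane area per unit length (energy-per-spike proxy) ~ diameter
    Only the relative scaling matters here, not the units.
    """
    return {
        "velocity": math.sqrt(diameter),
        "space": diameter ** 2,
        "energy": diameter,
    }

thin, thick = unmyelinated_axon_tradeoff(1.0), unmyelinated_axon_tradeoff(2.0)
print(thick["velocity"] / thin["velocity"])  # ~1.41x faster signal...
print(thick["space"] / thin["space"])        # ...for 4x the volume
print(thick["energy"] / thin["energy"])      # ...and ~2x the pumping cost per spike
```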
Steve: Yeah, we have this illustration in the article that's called "Why We Probably Cannot Get Much Smarter" and it reminds me, you know the movie Blade Runner?
Fox: (laughs) I think I played the video game Blade Runner when I was a kid. I don't know if I saw the movie. (laughs)
Steve: Okay, well there's a scene in Blade Runner where one of the replicants, played by Rutger Hauer, is talking to the man who created him. Because the replicants have a very short lifespan, he keeps asking his creator, "Well, what if we do this? Did you try that?" and the creator says, "Yes, we tried that, but if you do that then this happens, and you're back where you began." So we have this chart, and you just discussed the problems that you wind up with if you have fatter axons. Why don't we go through the other three possibilities here. One is that you increase the brain size: just by enlarging the brain, you add a lot more neurons, which could increase processing capacity, but it's not that easy. So if you just increase the brain size, I mean, forget even about the issues of passing a larger brain through the birth canal of a bipedal organism like us.
Fox: That's boring stuff, yeah.
Steve: Right, that's right, that's just macro stuff. So you increase the brain size and what kind of problems do you have?
Fox: Probably a couple of things. One is simply that bigger brains take more energy. You know, right now our brains, in a resting human, are consuming about 20 percent of the calories that we eat and 20 percent of the oxygen that we breathe in. Is it possible to have a human being, to have an organism, in which the brain is consuming 30, 40 or 50 percent of its calories?
Steve: Well, you do have that, yeah? Babies.
Fox: Yeah, right, exactly. And babies, you know, if you put a baby on the African savanna, with these carnivorous animals and large game running around, it's not going to do very well. And I think that kind of gets to the point of it. I mean, babies are a facetious way to talk about it, but as you start giving a greater and greater percentage of calories to the brain, you kind of end up with, sort of, an ecologically less and less plausible organism. I mean, could we give 30 percent of our calories to the brain? Sure, okay; with a larger brain, probably our bodies are larger too. So instead of taking in 2,000 calories a day, maybe we're having to take in 4,000 calories per day just to keep from starving, and 30 percent of those calories don't go to the muscles that allow us to run around and chase down big game and that sort of thing. What about 40 percent of calories? At some point, you wonder whether you're actually eating away at, sort of, eroding, other capabilities of the body that the organism ultimately needs to survive in a world that can be dangerous and violent.
Steve: Not to mention that the neurons get longer, so it takes longer for messages to travel, and everything kind of slows down. I mean, we're talking about microseconds, and yet microseconds are important when you're engaging in neuromuscular activities.
Fox: Exactly, and that's the other thing. I mean, with a brain, I think people who study artificial neural networks know this: there's a certain level of connectivity that you need in order to have, sort of, robust information-coding capacity. It's not all that well characterized, but essentially what we're saying is that if you have, say, a brain with a thousand neurons, you can't just have each neuron talking to three other neurons. You're not well enough connected; you may need to have each neuron talking to, say, 50 percent of the other neurons, for example. But if the brain gets larger, you're adding more and more neurons. That means more and more, sort of, essentially Facebook friends that each neuron has to keep up with, and what that really means is that the neuron has to make physical connections with all those other neurons, and that literally makes each brain cell bigger. So, if you double the size of the brain, you're not necessarily doubling the number of neurons. Now there are variations in this, you know, between say humans and rodents and other animals, but as you double the size of the brain, the neurons themselves actually have to get bigger. And this is where you start getting into, sort of, these backbends, as Mark Changizi has put it, these backbends that the brain goes through to solve this connectivity problem, where you say, "Okay, we're now at a hundred billion neurons; we can't have every neuron in touch with every other neuron personally." That would be like having Barack Obama talk to every single federal employee, which wouldn't work very well. So we're going to compartmentalize. We'll have Barack Obama, a.k.a. the Barack Obama neuron, be in individual contact with a hundred other employees; each of those hundred is in touch with maybe a hundred other employees. And by having compartmentalization, the neurons don't have to be as big; they don't have to talk to as many other people. But at the same time, any given federal employee, or any given neuron, only has, you know, maybe five phone calls to make, going through intermediaries, to reach the Barack Obama neuron, if that makes sense.
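[A minimal sketch of the connectivity arithmetic behind that phone-tree idea, with hypothetical numbers: wiring every neuron directly to every other one requires a number of connections that grows with the square of the neuron count, while a fixed fan-out hierarchy keeps both the connection count and the number of hops between any two neurons manageable. The fan-out of 100 and the tree layout are illustrative assumptions, not the brain's actual wiring.]

```python
import math

def all_to_all_synapses(n_neurons: int) -> int:
    """Connections needed if every neuron talks directly to every other neuron."""
    return n_neurons * (n_neurons - 1)

def fanout_synapses_and_hops(n_neurons: int, fanout: int) -> tuple[int, int]:
    """Connections and worst-case 'phone calls' (hops) if each neuron talks
    directly to only `fanout` others, arranged as a tree of intermediaries."""
    synapses = n_neurons * fanout
    hops = math.ceil(math.log(n_neurons, fanout))  # depth of the tree
    return synapses, hops

n = 100_000_000_000  # roughly 10^11 neurons, the order of a human brain
print(all_to_all_synapses(n))            # ~10^22 connections: hopeless to wire
print(fanout_synapses_and_hops(n, 100))  # ~10^13 connections, about 6 hops end to end
```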
Steve: Sure, I mean, you're creating a phone tree and therefore you decentralize the information, and you have a more efficient system.
Fox: And I should say, that was a very hierarchical example, with the Barack Obama neuron on top, and brains are not really hierarchical in that way. But nonetheless it's the same thing happening.
Steve: And while we're on the subject of bigger brains, you address something that I've often wondered about, with, you know, the naïve assumption that bigger brains always mean more intelligence: how come whales aren't super smart? I mean, we can't really communicate directly with them, so we don't know for sure, but it doesn't appear that way. Or to take a land mammal example, elephants. Their brains are much, much bigger than ours, and they're obviously incredibly intelligent animals, but you know, they're not intelligent in the same way that we are, and you address that early in the article.
Fox: Actually, the comparison I like, for example, might be going from a mouse to a cow. The mouse is, you know, kind of smart; it can do mazes and all that, and it can get out of its cage if you let it find a way. But the cow has a brain which is, you know, maybe several hundred times larger, and it's not several hundred times smarter. It's probably not even several times smarter. In fact, maybe it's not even any smarter; maybe it's even kind of dumber. And this is something that people were really interested in in the late 19th and early 20th century: what's the relationship between intelligence and brain size? And they found that there's actually a much more particular, much more specific relation between body size and brain size; that as the body gets larger, the brain gets larger according to a three-quarter power law, which is to say the brain is getting larger almost as quickly as the body, but not quite as quickly. And you have to wonder, what's the relationship here? This is something that people have speculated about for a long time, and we don't really know. One of the things people think is that perhaps the brains are getting larger simply to take care of a lot of the, sort of, boring neural housekeeping drudgery that brains have to do in order to maintain bodies: things like controlling the digestive tract; monitoring all of those touch receptors and ouch receptors and "Hey, that burnt me" receptors on the skin, because the more skin area you have, the more of those receptors you probably have; controlling more muscle fibers, because you may literally have more muscles as organisms get bigger, but you almost certainly have more fibers within those muscles that have to be innervated, so you're literally controlling more of those individual fibers; and monitoring a larger retina for information coming in. All these things eat up resources, but they don't necessarily make you smarter.
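[The three-quarter power law Fox mentions can be written out explicitly; the 10,000-fold figure below is an arbitrary worked example, not a measurement from the article or from any particular pair of species.]

```latex
% Across species, brain mass scales roughly with body mass to the 3/4 power.
% Worked example: if one animal's body is 10,000 times more massive than
% another's, the power law predicts a brain only
%   10,000^{3/4} = (10^4)^{3/4} = 10^3 = 1,000
% times more massive: the brain grows, but it lags behind the body.
M_{\mathrm{brain}} \propto M_{\mathrm{body}}^{3/4},
\qquad (10^{4})^{3/4} = 10^{3}
```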
Steve: One of the really interesting things that you talk about in the article is the, kind of, signal-to-noise issue that involves the ion channels when you try to pack more neurons into the existing space.
Fox: Sure. This is actually the thing that first got me interested in what eventually became this story for Scientific American. The idea is, okay, so we don't necessarily want to make brains a whole lot bigger in order to get smarter, but what if we pack more neurons in, right? More neurons means more, sort of, parallel processing units in the brain, so theoretically we can process more information. So we pack them in tighter, into that fixed volume, by making them smaller. Okay, that's great, we do that. There is a researcher, Simon Laughlin at the University of Cambridge in the UK, who's looked into the question of how small neurons can get. And actually, more specifically, he's looked at how small axons can get; the axons are the telegraph wires connecting neurons. And what he's found is that when axons get smaller, below a certain threshold they start to get, kind of, a bit noisy. This is mostly through modeling studies, but it more or less agrees with the electrophysiological data that's come out of real, living neurons. Normally the axon sends out pulses only when the neuron itself gets a certain threshold of input coming in from other neurons; but when the axon gets below a certain size, sometimes, even when there's no information coming into the neuron, the axon will just send out kind of a hiccup, a spontaneous action potential. It's like your telephone: you're not picking up the telephone to call your friend, but your telephone calls your friend anyway. And your friend answers it and says, "Hello, hello, hello?" and there's no one there.
Steve: The infamous pocket call that we now experience.
Fox: A pocket call, otherwise known as a butt call, yeah, with (noises) kind of in the background, exactly. And the neural version of this pocket call is that these ion channels in the axon, all the way down the axon, that, kind of, flip open and let ions pass into the neuron, are a little bit unreliable. They have a random element to them, and that is, they're such small little machines that just thermal fluctuations can cause them to pop open. Normally they're only popping open when there's actually a signal pulsing down the axon; you get a whole bunch of them opening when a signal passes. But occasionally individual ones open on their own, just due to thermal vibrations. Now this is okay if you have thousands of ion channels in the axon and only a few at any given time are, sort of, burping open accidentally. But the problem is if the axon gets down to a very small size, where in a certain patch of axon you've only got maybe 10 of them; one of them burps open, and that's enough ions to actually change the local potential of the membrane there, and it can set off an entire action potential.
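[A toy calculation of the channel-noise problem Fox is describing, under assumed numbers: suppose each channel in a patch of axon membrane has a 1-in-100 chance of popping open spontaneously at any given moment, and suppose a spurious spike fires whenever at least a tenth of the patch's channels happen to be open at once. Both of those numbers, and the simple binomial model itself, are illustrative assumptions, not figures from the article.]

```python
import math

def binomial_pmf(n: int, k: int, p: float) -> float:
    """P(exactly k of n independent channels are open), computed in log space
    so that large channel counts don't overflow."""
    log_pmf = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
               + k * math.log(p) + (n - k) * math.log1p(-p))
    return math.exp(log_pmf)

def spurious_spike_probability(n_channels: int, p_open: float,
                               threshold_fraction: float = 0.1) -> float:
    """Toy model: a spurious spike fires whenever at least threshold_fraction
    of the patch's channels are spontaneously open at the same moment."""
    k_needed = max(1, math.ceil(threshold_fraction * n_channels))
    return sum(binomial_pmf(n_channels, k, p_open)
               for k in range(k_needed, n_channels + 1))

p = 0.01  # assumed chance that any one channel pops open on its own
print(spurious_spike_probability(10, p))      # tiny patch: ~0.1, a constant hiccup
print(spurious_spike_probability(10_000, p))  # big patch: effectively zero
```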
Steve: And this is not good.
Fox: Yeah, this is not good. I mean, another analogy: let's say you have an election with 100 million people; you know that some of them are dumb and they're going to vote for stupid things. That's okay, because you've got 100 million people; you've got a lot of smart people in there. Well, what if you have 10 people and three of them are not paying attention, not smart? Well, you could swing an election, so to speak, and that's what's happening; you've got individual ion channels opening accidentally, and that's essentially swinging the election inside the axon, saying, "Okay, let's fire right now."
Steve: At the end, you go to a place that it kind of feels like you're headed toward throughout the article, and that is the Internet as this external, kind of collective brain that takes care of some of these problems, because we've taken all the information out of our limited bodies.
Fox: I know it's really stylish to talk about the Internet, but I do think this is onto something. I mean, here's what I see. When one thinks about whether there are limits to human intelligence, human cognitive ability, and whether these limits are based on, sort of, you know, the energy consumption of our brain, they're not going to be hard limits. It's not like the speed of light, where we get up to a certain point and say, "Okay, that's the limit; Stephen Hawking is the limit." It's not going to be like that. It's going to be that you're sort of at the point where you're investing, as a species and as an organism, so much energy and resource into your brain that as you put even more in, perhaps you get a little smarter, but you're also starting to detract from other aspects that make you something that's able to survive, you know, your ability to move around and respond to physiological challenges. So it just gets harder and harder to go past a certain threshold of intelligence. It's sort of like holding your hand over a flame: there's no limit to how long you can hold your hand over that barbecue grill, but it just hurts more and more; so when do you pull your hand away, or when do you stop getting smarter? Well, there's another thing that probably kicks in, which is that there may be mechanisms that, sort of, take some of the burden of being smart off of the brain; that essentially decrease the impetus for evolving individually smarter brains. And a lot of this is probably, you know, collective intelligence, and you see this with the honeybee. Okay, it's only got about a million neurons, way fewer than us, but honeybees have probably achieved a level of behavioral variability, you could call it intelligence, that they could never have achieved otherwise except by living in hives, where different bees take on different roles. One bee could never have mastered all this, but through collective, sort of, collective intelligence they can accomplish it. With humans, it's plausible that something similar has happened, and probably that would have to do with having evolved language, both spoken language and written language. There's only so much we can learn in our lifespan, for example, but now we can actually transmit that. We can transmit it vertically, from generation to generation, so that each person doesn't have to have that experience of sitting under the apple tree, you know, the apocryphal experience of having an apple fall and saying, "Ah, gravity!" We can just teach that in the textbook really quickly and efficiently, so that you can get that down and get on to more advanced concepts. And that's one example. And of course there's specialization, in the sense that people like Jared Diamond have talked about, where there are people who are producing the food so that some of the rest of us can spend time making jet engines and guns out of steel and composing sonnets; that degree of specialization works, sort of, laterally too, so that no one brain has to know everything or be good at everything.
I think it's possible to say that these things have probably decreased whatever evolutionary impetus there might have been to evolve even smarter brains, smarter humans. It might be that, evolutionarily speaking, you get a bigger bang for the buck by investing instead in improved, sort of, culture and improved collective intelligence. And you could even think about it in this sense, and this is not an area that I've investigated as a journalist, but we probably devote a lot of our brain to communication, to processing things that we need to communicate, like understanding speech and learning to read and write. Well, what if we devoted those brain areas to something else? Or maybe we did, you know; maybe these brain areas were formerly performing other functions, and we gave up some of those functions, or, kind of, marginally decreased our ability at some of the things those brain areas were doing, in order to say, "Hey, brain area, instead of doing this, I want you to do speech now; I want you to do written language instead." It may be that whatever cost was associated with losing some of those other abilities, or marginally decreasing them, was more than paid back by having better communication, and therefore better culture, and being able to read books and know way more at the age of 12 than just about anyone knew a thousand years ago.
Steve: It's a fascinating subject to have a couple of beers over and muse about, but the…
Fox: Martinis even.
Steve: Martinis, right. It's that fascinating. But most of the article, the overwhelming majority of the article, is about the actual physical limits that we've been talking about. The article is called "The Limits of Intelligence." It's in the July issue of Scientific American. Douglas Fox, thanks very much.
Fox: Thank you.
Steve: Well, that's it for this episode. Till next time, get your science news at our Web site, www.ScientificAmerican.com, where you can check out our section on Citizen Science. These are scientific projects that need you to chip in to do some of the research. For example, you can help entomologists get a better handle on ladybug distribution across the continent by taking part in the Lost Ladybug Project. Just look for the Citizen Science section on our home page, and follow us on Twitter, where you'll get a tweet each time a new article hits the Web site. Our Twitter name is @SciAm, S-C-I-A-M. For Science Talk, I am Steve Mirsky. Thanks for clicking on us.