The number of smartphones, tablets and other network-connected gadgets will outnumber humans by the end of the year. Perhaps more significantly, the faster and more powerful mobile devices hitting the market annually are producing and consuming content at unprecedented levels. Global mobile data grew 70 percent in 2012, according to a recent report from Cisco, which makes a lot of the gear that runs the Internet. Yet the capacity of the world’s networking infrastructure is finite, leaving many to wonder when we will hit the upper limit, and what to do when that happens.
There are ways to boost capacity, of course, such as adding cables, packing those cables with more data-carrying optical fibers and off-loading traffic onto smaller satellite networks, but these steps simply delay the inevitable. The solution is to make the infrastructure smarter. Two main components would be needed: computers and other devices that can filter their content before tossing it onto the network, and a network that better understands what to do with this content, rather than numbly perceiving it as an endless, undifferentiated stream of bits and bytes.
To find out how these major advances could be accomplished, Scientific American recently spoke with Markus Hofmann, head of Bell Labs Research in New Jersey, the research and development arm of Alcatel–Lucent that, in its various guises, is credited with developing the transistor, the laser, the charge-coupled device and a litany of other groundbreaking 20th-century technologies. Hofmann and his team see “information networking” as the way forward, an approach that promises to extend the Internet’s capacity by raising its IQ.
[An edited transcript of the interview follows.]
How do we know we are approaching the limits of our current telecom infrastructure?
The signs are subtle, but they are there. A personal example: when I use Skype to send my parents in Germany live video of my kids playing hockey, the video sometimes freezes at the most exciting moments. In all, this doesn’t happen too often, but it has been happening more frequently lately—a sign that networks are becoming stressed by the amount of data they’re asked to carry.
We know there are certain limits that Mother Nature gives us—there is only so much information you can transmit over certain communications channels. That phenomenon is called the nonlinear Shannon limit [named after former Bell Telephone Laboratories mathematician Claude Shannon], and it tells us how far we can push with today’s technologies. We are already very, very close to this limit—within roughly a factor of two. Put another way, based on our experiments in the lab, when we double the amount of network traffic we have today—something that could happen within the next four or five years—we will exceed the Shannon limit. That tells us there’s a fundamental roadblock here. There is no way we can stretch this limit, just as we cannot increase the speed of light. So we need to work with these limits and still find ways to continue the needed growth.
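The ceiling Hofmann describes can be illustrated with the classic Shannon–Hartley theorem, which caps the error-free data rate of any channel by its bandwidth and signal-to-noise ratio (the nonlinear Shannon limit for optical fiber is a more involved refinement of this idea). A minimal sketch in Python—the channel numbers below are illustrative assumptions, not figures from the interview:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley limit: the maximum error-free data rate (bits/s)
    of a channel with the given bandwidth and signal-to-noise ratio."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative numbers: a 50 GHz optical channel at a 20 dB
# signal-to-noise ratio (assumptions for the sketch).
bandwidth = 50e9               # 50 GHz
snr_db = 20.0
snr = 10 ** (snr_db / 10)      # convert dB to a linear power ratio

print(f"Capacity: {shannon_capacity(bandwidth, snr) / 1e9:.1f} Gbit/s")

# Doubling the signal power does NOT double capacity: the logarithm
# makes each extra decibel worth less and less, which is why simply
# pushing more power into existing fiber cannot keep pace with
# traffic that doubles every few years.
print(f"Doubled power: {shannon_capacity(bandwidth, 2 * snr) / 1e9:.1f} Gbit/s")
```

The diminishing return in the second line is the crux: capacity grows only logarithmically with signal power, so once a channel operates near its limit, the remaining headroom is small.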
How do you keep the Internet from reaching “the limit”?
The most obvious way is to increase bandwidth by laying more fiber. Instead of having just one transatlantic fiber-optic cable, for example, you have two or five or 10. That’s the brute-force approach, but it’s very expensive—you need to dig up the ground and lay the fiber, you need multiple optical amplifiers, integrated transmitters and receivers, and so on. An alternative is to explore another dimension: spatial division multiplexing, which is all about integration. Put simply, you transmit multiple parallel channels within a single fiber rather than laying separate cables for each. Still, boosting the existing infrastructure alone won’t be sufficient to meet growing communications needs. What’s needed is a network that no longer looks at raw data as only bits and bytes but rather as pieces of information relevant to a person using a computer or smartphone. On a given day, do you want to know the temperature, wind speed and air pressure, or do you simply want to know how you should dress? This is referred to as information networking.
What makes information networking different from today’s Internet?
A lot of people refer to the Internet as a “dumb” network, although I don’t like that term. What drove the Internet initially was non-real-time sharing of documents and data. The system’s biggest requirement was resiliency—it had to be able to continue operating even if one or more nodes [computers, servers and so on] stopped functioning. And the network was designed to see data simply as digital traffic, not to interpret the significance of that data.