In this era of Facebook, Twitter and the iPhone, it is easy to take for granted our ability to connect to the world. Yet communication is most critical precisely at those times when the communications infrastructure is lost. In Haiti, for example, satellite phones provided by aid agencies were the primary means of communication for days after the tragic earthquake earlier this year. But even an ordinary event such as a power outage can cripple the cell phone infrastructure, turning our primary emergency contact devices into glowing paperweights.
In situations such as these, an increasingly attractive option is to create an “ad-hoc” network. Such networks form on their own wherever specially programmed mobile phones or other communications devices are in range of one another. Each device in the network acts as both transmitter and receiver and, crucially, as a relay point for all the other devices nearby. Devices that are out of range can communicate if those between them are willing to help—passing messages from one to the next like water in a bucket brigade. In other words, each node in the network functions as both a communicator for its own messages and infrastructure for the messages of others.
Disaster relief is but one potential application for ad-hoc networks. They can serve anywhere that building a fixed infrastructure would be too slow, difficult or expensive. The military has invested heavily in designing such systems for battlefield communications. Ad-hoc networks in your home would allow devices to find one another and begin communicating automatically, freeing you from the tangle of wires in your living room and office. Remote villages and lower-income neighborhoods that lack a broadband infrastructure could connect to the Internet via ad-hoc networks. Scientists interested in studying microenvironments in the treetops or hydrothermal vents on the ocean floor could scatter sensors in the environment of interest without worrying about which sensors will hear one another or how information will travel from the jungle to the researchers’ laptops.
These networks have been in development for more than three decades, but only in the past few years have advances in network theory given rise to the first large-scale practical examples. The start-up Meraki Networks connects 400,000 San Francisco residents to the Internet through its Free the Net project, which relies on ad-hoc networking technology. Bluetooth components in cell phones, computer gaming systems and laptops use ad-hoc networking techniques to enable devices to communicate without wiring or explicit configuration. And ad-hoc networks have been deployed in a variety of remote or inhospitable environments to gather scientific data from low-power wireless sensors. A number of breakthroughs must still be achieved before these networks can become commonplace, but progress is being made on several fronts.
The Cellular Network
Ad-hoc networks are still rare. To understand why they have been slow in coming, it helps to consider the differences between the newer approach and such wireless technologies as cell phones and Wi-Fi. When you use an ordinary mobile phone to call a friend, only the transmissions between each phone and its nearest cell tower are wireless. The towers are fixed in place, and communication between the towers travels through vast networks of wires and cables. Wireless local-area networks such as Wi-Fi also rely on fixed antennas and wireline communications resources.
This approach has advantages and disadvantages. Power is required to transmit information, and classic wireless networks conserve it in battery-powered devices (such as phones and laptops) by shifting as much of the communications burden as possible onto a stationary infrastructure that is plugged into the power grid. Similarly, wireless bandwidth is a fixed and limited resource, and traditional wireless systems conserve it by sending most information through wires. The use of fixed infrastructure allows the construction of large, mostly reliable telephone and Wi-Fi networks in areas where the need is greatest.
Yet the use of fixed infrastructure makes these networks vulnerable to power outages and other central failures that can incapacitate a communications network even if individual phones and laptops in the area can still function. Ad-hoc networks, in contrast, are uniquely robust. If one mobile device runs out of power or is turned off, the remaining ones modify the network to compensate, as much as possible, for the missing constituent. The networks adjust and “heal” naturally as devices come and go.
This self-healing ability comes at a price, though. The network must send information in clever ways so that a message can be reconstructed even if some of the links between sender and receiver break during transit. The system must determine the best way to get a message to the recipient—even if the sending device has no way of knowing where the recipient is. And finally, the network must also deal with the omnipresent noise of multiple devices transmitting messages at similar times.
Delivery Strategies
The problem of how to efficiently route information through an ever-changing network has been difficult to solve for several reasons. In a traditional cell phone or other wireless network, the central wireline infrastructure keeps track of the general locations of individual devices. It can then take a message from one user and direct it straight to its recipient.
In contrast, communications devices in ad-hoc networks have to determine on their own the best means of delivering information. The individual devices are limited in computing power, memory and communication capacity, so no single one of them can gather or process all the information that would be available to the central computers of a traditional wireless network.
The situation can be illustrated through the following scenario: You are in a large city, say London, and you need to contact a friend who is at some unknown location on the other side of town. In this pretend world, the communications infrastructure is mounted on the roofs of taxicabs. The radio on each cab has a range of less than a mile, and the taxis travel much more slowly than the speed of communication, so the cabs must work together to deliver your message. As they jostle through the city, nearby radios connect to one another, then split apart an unpredictable amount of time later. Your call must leapfrog through the city on the back of this undulating network, find your friend and deliver its contents.
The task is difficult even for a single message moving through a small network; the difficulty only increases as the number of devices and messages grows. For the technology to be really useful, it needs to work efficiently no matter how large or small the network becomes.
Many techniques have been developed to tackle this problem. At their core, they involve a lot of asking for directions. One device queries its neighbors to see what devices are nearby; those devices query their own neighbors, and so on until your friend receives the message. Your friend’s reply can travel back along the same path, or it can seek out a different one. In this way, each intermediate device builds a list of available paths between you and your friend. These lists allow a message to reach your friend even if your own device does not know your friend’s location. Because the network is in motion, devices must constantly repeat the process of query and response to keep the menu of available paths up-to-date.
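To make the idea concrete, here is a minimal sketch of such a route discovery in Python, set in the taxicab scenario above. The device names and the map of which radios can currently hear one another are invented, and the code inspects that map from a bird’s-eye view; a real protocol would instead flood an actual route-request packet hop by hop, with no global picture anywhere.

```python
from collections import deque

def discover_route(links, source, target):
    """Flood a route request outward from `source`, breadth-first.
    Each copy of the request carries the path it has taken so far,
    so the first copy to reach `target` doubles as the return route."""
    frontier = deque([[source]])       # each entry is a partial path
    visited = {source}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == target:
            return path                # the reply retraces this path
        for neighbor in links.get(node, ()):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None                        # no chain of devices currently connects the two

# A snapshot of which radios can currently hear one another (invented):
links = {
    "you":  ["cab1", "cab2"],
    "cab1": ["cab3"],
    "cab2": ["cab3", "cab4"],
    "cab3": ["friend"],
}
print(discover_route(links, "you", "friend"))  # ['you', 'cab1', 'cab3', 'friend']
```

Because the cabs keep moving, any discovered path soon goes stale, which is why the cycle of query and response must be repeated continually.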
It is also useful to send information along several paths at once, thereby increasing the chance that the message gets through. The question is how redundant the system should be. At one extreme, the network can send the entire message down every path. This strategy increases the chances that the message will get through, but taking this approach with every message will quickly swamp the network. At the other extreme, we can break up the information into a stream of component chunks, then send each chunk down its own path. This method uses less of the network’s resources, but many of the bits may get lost in transit, leaving the recipient with just a partial message.
A technique called network coding offers a middle ground. It involves breaking a message into chunks, mathematically combining those chunks into coded pieces, and then sending the coded pieces down multiple pathways in such a way that the original message can be reconstructed at the recipient’s end even if some of the pieces are lost.
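A toy example conveys the flavor. Suppose, purely for illustration, that a message is split into two chunks and a third, coded piece is formed from their bitwise XOR; the three pieces travel down three different paths, and any two that arrive suffice to rebuild the whole message. Real network codes combine many chunks with richer arithmetic, but the principle is the same:

```python
def encode(chunk_a: bytes, chunk_b: bytes) -> dict:
    """Split the message into two equal-length chunks and add a third,
    coded piece: their bitwise XOR. Each piece travels down its own path."""
    parity = bytes(x ^ y for x, y in zip(chunk_a, chunk_b))
    return {"A": chunk_a, "B": chunk_b, "A^B": parity}

def decode(received: dict) -> bytes:
    """Rebuild the message from any two of the three pieces."""
    xor = lambda p, q: bytes(x ^ y for x, y in zip(p, q))
    if "A" in received and "B" in received:
        return received["A"] + received["B"]
    if "A" in received:                            # B was lost: B = A xor (A^B)
        return received["A"] + xor(received["A"], received["A^B"])
    return xor(received["B"], received["A^B"]) + received["B"]   # A was lost

pieces = encode(b"HELP", b"US!!")
pieces.pop("A")                 # one of the three paths fails in transit
print(decode(pieces))           # b'HELPUS!!'
```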
One aspect of network coding involves deciding how many paths to send a message through. Increasing the number of paths decreases the impact of any single path failure, although it increases the number of devices involved in a single call. This strategy spreads the load of the call across more participants, decreasing the power burden for each while increasing the amount of coordination required.
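A few lines of arithmetic show the trade-off in numbers. Assume, hypothetically, that each path fails independently with probability 0.2 and that the message survives as long as at least two coded chunks arrive, as in the XOR sketch above:

```python
from math import comb

def delivery_prob(paths: int, needed: int, p_fail: float) -> float:
    """Chance that at least `needed` coded chunks arrive when each of
    `paths` paths fails independently with probability `p_fail`."""
    return sum(comb(paths, k) * (1 - p_fail) ** k * p_fail ** (paths - k)
               for k in range(needed, paths + 1))

# Two chunks are needed; each extra path carries one more coded chunk.
for n in (2, 3, 4, 5):
    print(n, "paths:", round(delivery_prob(n, needed=2, p_fail=0.2), 3))
# 2 paths: 0.64   3 paths: 0.896   4 paths: 0.973   5 paths: 0.993
```

Going from two paths to three lifts the delivery odds from 64 to about 90 percent; each path after that helps less while enlisting more devices.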
As more devices begin transmitting—whether in support of one or many conversations—the chance of interference also increases. Just as it is difficult to understand anything when too many people speak at once, it is difficult for a wireless device to recover transmitted information when other transmissions occur nearby. These problems are especially troublesome in wireless ad-hoc networks because there is no central controller working to coordinate among the participants.
Interference in wireless networks can be handled in one of two ways. The first is to avoid conflict. If transmissions are rare, the chance that messages will interfere with one another is small. For this strategy, each device breaks information into small pieces and transmits only in short bursts. Because uncoordinated neighbors are unlikely to transmit at the same time, this approach creates less interference than would result if users transmitted information in a slow, steady stream. (The most common wireless networking standard for personal computers relies on this burst approach.)
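A rough simulation illustrates why short bursts help. The time slots and duty cycles below are idealized stand-ins for real radio timing:

```python
import random

def collision_rate(n_devices: int, duty_cycle: float, slots: int = 100_000) -> float:
    """Fraction of time slots in which two or more uncoordinated devices
    transmit at once. Each device transmits in any given slot with
    probability `duty_cycle`, its share of on-air time."""
    collided = sum(
        sum(random.random() < duty_cycle for _ in range(n_devices)) >= 2
        for _ in range(slots)
    )
    return collided / slots

# Ten neighbors: short bursts (on the air 5% of the time) vs. steady streams (50%).
print("bursts :", round(collision_rate(10, 0.05), 2))   # about 0.09
print("streams:", round(collision_rate(10, 0.50), 2))   # about 0.99
```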
The second strategy allows two transmitters to send information to a receiver simultaneously but requires one to transmit more quietly than the other. If you speak loudly while someone else whispers, I can typically recover your message without difficulty. If I have a recording, I can then subtract out your message to recover the quieter one.
The second approach turns out to be superior for a network with just two transmitters sending messages and a third device receiving them; it becomes much more problematic as the number of transmitters grows. The system must somehow coordinate who should transmit at high volume and who should transmit quietly. Coordination itself requires communication, and the more effort you spend on coordination, the less bandwidth remains for the messages themselves. Finding the best possible strategy remains a topic of ongoing research.
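The loud-and-quiet trick, known in the field as successive interference cancellation, can be sketched with bare numbers standing in for radio waveforms:

```python
# Two transmitters send symbols from {-1, +1}; the receiver hears their sum,
# with the "loud" one at four times the amplitude of the "quiet" one.
loud_power, quiet_power = 4.0, 1.0
loud_bits  = [+1, -1, -1, +1]
quiet_bits = [-1, -1, +1, +1]

received = [loud_power * a + quiet_power * b
            for a, b in zip(loud_bits, quiet_bits)]

# Step 1: decode the loud signal first; it dominates every sample.
loud_decoded = [+1 if r > 0 else -1 for r in received]

# Step 2: subtract the loud signal out and decode what remains.
residual = [r - loud_power * a for r, a in zip(received, loud_decoded)]
quiet_decoded = [+1 if r > 0 else -1 for r in residual]

print(loud_decoded == loud_bits, quiet_decoded == quiet_bits)  # True True
```

Note that the receiver must know the loud transmitter’s power to subtract it out, and that knowledge is exactly the kind of coordination that becomes hard to arrange as the network grows.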
New Tools
Although ad-hoc networks are useful in a wide variety of situations, it can be difficult to determine exactly how useful they can be. Even simple questions about the limits of their performance are hard to answer. At what rate can we transmit information across them? How does this rate depend on the number of devices in the network and the interference they generate? What happens when all the devices in the network are moving? And what are the trade-offs among the rate at which information is transmitted, the delay associated with its delivery, and the robustness of the system?
The value of obtaining these kinds of fundamental performance limits on ad-hoc networks is enormous. The limits give network designers benchmarks against which to judge new techniques and help researchers determine where the biggest gains can be had in existing networks. In addition, awareness of these limits lets network designers balance competing priorities such as data rate, delay and probability of loss. For example, phone calls and teleconferences are extremely sensitive to delay. Large delays or inconsistent packet arrival rates can cause breaks or starts and stops in audio and video transmissions that make conversation difficult. Once designers understand the structure of the specific network they are working on, they can program each application to prioritize its needs: low delay for telephone conversations, or a low packet-loss rate for sending essential documents.
This kind of understanding is hard to come by in ad-hoc networks because they are constantly changing. To understand the ultimate limit of the network, you can’t just measure how the network is performing now—you have to measure how the network would perform in every possible configuration.
We have taken a new approach to this problem that maps wireless ad-hoc networks onto something we understand much more clearly: ordinary wired networks. We information scientists have in our toolbox more than six decades’ worth of methods for studying the flow of information in wired networks, which do not suffer from interference and whose nodes do not move around. If we want to study a certain wireless network, we first model it as a wireline network that captures the central features of the wireless network’s behavior. We can then characterize the complete performance limits of the ad-hoc network using its doppelgänger as a guide.
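One payoff of the wired stand-in is that classic tools apply directly. The best possible throughput between two nodes of a wired network, for instance, is given by the max-flow/min-cut theorem and can be computed in a few lines; the graph and link capacities below are invented for illustration:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp max-flow: repeatedly push flow along the shortest
    path that still has spare capacity until no such path remains."""
    flow = 0
    while True:
        # Breadth-first search for a path with leftover capacity.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in capacity.get(u, {}).items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # Walk back from the sink to find the bottleneck, then update.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(capacity[u][v] for u, v in path)
        for u, v in path:
            capacity[u][v] -= bottleneck
            capacity.setdefault(v, {})[u] = capacity.get(v, {}).get(u, 0) + bottleneck
        flow += bottleneck

# A wired stand-in for a tiny ad-hoc network (capacities invented):
links = {"s": {"a": 3, "b": 2}, "a": {"t": 2, "b": 1}, "b": {"t": 3}}
print(max_flow(links, "s", "t"))   # 5
```

The number that comes out is a hard ceiling: no routing scheme on this wired model, however clever, can move information from "s" to "t" faster than that.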
This process helps us build better networks because we can understand the implications of our design choices. It also allows us to determine where our current approaches are doing well and where there is room for improvement.
Even with these tools, we do not expect ad-hoc networks to replace the existing cellular infrastructure. But in the unique situations where ad-hoc networks are essential, such tools will allow for a full understanding of just how powerful the network can be—exactly when it is needed most.