Here's a fun experiment: Try counting the electronic sensors surrounding you right now. There are cameras and microphones in your computer. GPS sensors and gyroscopes in your smartphone. Accelerometers in your fitness tracker. If you work in a modern office building or live in a newly renovated house, you are constantly in the presence of sensors that measure motion, temperature and humidity.

Sensors have become abundant because they have, for the most part, followed Moore's law: they just keep getting smaller, cheaper and more powerful. A few decades ago the gyroscopes and accelerometers that are now in every smartphone were bulky and expensive, limited to applications such as spacecraft and missile guidance. Meanwhile, as you might have heard, network connectivity has exploded. Thanks to progress in microelectronics design as well as management of energy and the electromagnetic spectrum, a microchip that costs less than a dollar can now link an array of sensors to a low-power wireless communications network.

The amount of information this vast network of sensors generates is staggering—almost incomprehensible. Yet most of these data are invisible to us. Today sensor data tend to be “siloed,” accessible by only one device for use in one specific application, such as controlling your thermostat or tracking the number of steps you take in a day.

Eliminate these silos, and computing and communications will change in profound ways. Once we have protocols that enable devices and applications to exchange data (several contenders exist already), sensors in anything can be made available to any application. When that happens, we will enter the long-predicted era of ubiquitous computing, which Mark Weiser envisioned in this magazine a quarter of a century ago [see “The Computer for the 21st Century”; September 1991].

We doubt the transition to ubiquitous computing will be incremental. Instead we suspect it will be a revolutionary phase shift much like the arrival of the World Wide Web. We see the beginnings of this change with smartphone applications such as Google Maps and Twitter and the huge enterprises that have emerged around them. But innovation will explode once ubiquitous sensor data become freely available across devices. The next wave of billion-dollar tech companies will be context aggregators, who will assemble the sensor information around us into a new generation of applications.

Predicting what ubiquitous computing and sensor data will mean for daily life is as difficult as predicting 30 years ago how the Internet would change the world. Fortunately, media theory can serve as a guide. In the 1960s communications theorist Marshall McLuhan spoke of electronic media, mainly television, becoming an extension of the human nervous system. If only McLuhan were around today. When sensors are everywhere—and when the information they gather can be grafted onto human perception in new ways—where do our senses stop? What will “presence” mean when we can funnel our perception freely across time, space and scale?

Visualizing Sensor Data
We perceive the world using all our senses, but we digest most digital data through tiny two-dimensional screens on mobile devices. It is no surprise, then, that we are stuck in an information bottleneck. As the amount of information about the world explodes, we find ourselves less able to remain present in that world. Yet there is a silver lining to this abundance of data, as long as we can learn to use it properly. That is why our group at the M.I.T. Media Lab has been working for years on ways to translate information gathered by networks of sensors into the language of human perception.

Just as browsers like Netscape gave us access to the mass of data contained on the Internet, so will software browsers enable us to make sense of the flood of sensor data that is on the way. So far the best tool for developing such a browser is the video game engine—the same software that lets millions of players interact with one another in vivid, ever-changing three-dimensional environments. Working with the game engine Unity 3D, we have developed an application called DoppelLab that takes streams of data collected by sensors placed throughout an environment and renders the information in graphic form, overlaying it on an architectural computer-aided design (CAD) model of the building. At the Media Lab, for example, DoppelLab collects data from sensors throughout the building and displays the results on a computer screen in real time. A user looking at the screen can see the temperature in every room, or the foot traffic in any given area, or even the location of the ball on our smart Ping-Pong table.
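As a rough illustration of the data side of such a browser, here is a minimal Python sketch, with room names and sensor kinds invented for the example: it keeps the latest reading of each kind per room and maps temperature onto a color that a three-dimensional overlay could use. DoppelLab itself runs on Unity 3D; this sketch only conveys the idea.

```python
# Minimal sketch of the data side of a DoppelLab-style browser.
# Room names and sensor kinds below are hypothetical examples.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Reading:
    room: str         # room identifier matching the CAD model
    kind: str         # e.g., "temperature", "motion"
    value: float
    timestamp: float  # seconds since epoch

def latest_by_room(readings):
    """Keep only the most recent reading of each kind per room."""
    latest = defaultdict(dict)
    for r in sorted(readings, key=lambda r: r.timestamp):
        latest[r.room][r.kind] = r.value   # later readings overwrite earlier ones
    return latest

def temperature_to_color(celsius, lo=18.0, hi=30.0):
    """Map a temperature onto a blue-to-red gradient for the 3-D overlay."""
    t = max(0.0, min(1.0, (celsius - lo) / (hi - lo)))
    return (t, 0.0, 1.0 - t)   # (red, green, blue)

if __name__ == "__main__":
    sample = [
        Reading("atrium", "temperature", 24.5, 1000.0),
        Reading("atrium", "motion", 3.0, 1001.0),
        Reading("e14-348", "temperature", 21.2, 1002.0),
    ]
    for room, values in latest_by_room(sample).items():
        color = temperature_to_color(values.get("temperature", 20.0))
        print(room, values, "overlay color:", color)
```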

DoppelLab can do much more than visualize data. It also gathers sounds collected by microphones scattered about the building and uses them to create a virtual sonic environment. To protect privacy, audio streams are obfuscated at the originating sensor device before they are transmitted. This renders speech unintelligible while maintaining the ambience of the space and the vocal character of its occupants. DoppelLab also makes it possible to experience data recorded in the past. One can observe a moment in time from various perspectives or fast-forward to examine the data at different timescales, uncovering hidden cycles in the life of a building.
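One simple way to achieve this kind of obfuscation (a sketch of the general idea, not necessarily the scheme DoppelLab uses) is to chop the audio into short grains on the device and shuffle them before transmission; words become unintelligible while the loudness and character of the room survive.

```python
# Sketch of on-device audio obfuscation by grain shuffling (an assumed method,
# offered only to illustrate how speech can be scrambled before transmission).
import random

def obfuscate(samples, sample_rate=16000, grain_ms=80, seed=None):
    """Shuffle short grains of audio so speech is unintelligible but ambience survives."""
    grain = max(1, int(sample_rate * grain_ms / 1000))
    chunks = [samples[i:i + grain] for i in range(0, len(samples), grain)]
    rng = random.Random(seed)
    rng.shuffle(chunks)                       # scramble word order locally, on the sensor
    return [s for chunk in chunks for s in chunk]

if __name__ == "__main__":
    # A fake one-second "recording": a simple ramp standing in for audio samples.
    recording = list(range(16000))
    scrambled = obfuscate(recording, seed=42)
    print(len(scrambled) == len(recording))   # same length, grains in a new order
```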

Sensor browsers such as DoppelLab have immediate commercial applications—for example, as virtual-control panels for large, sensor-equipped buildings. In the past a building manager who wanted to track down a problem in the heating system might have sorted through spreadsheets and graphs, cataloguing anomalous temperature measurements and searching for patterns that would point to the source. Using DoppelLab, that person can see the current and desired temperature in every room at once and quickly spot issues that span multiple rooms or floors. More than that, planners, designers and building occupants alike can see how the infrastructure is being used. Where do people gather and when? What effects do changes in the building have on how people interact and work within it?

But we did not make DoppelLab with commercial potential in mind. We built it to explore a bigger and more intriguing matter: the impact of ubiquitous computing on the basic meaning of presence.

Redefining Presence
When sensors and computers make it possible to virtually travel to distant environments and “be” there in real time, “here” and “now” may begin to take on new meanings. We plan to explore this shifting concept of presence with DoppelLab and with a project called the Living Observatory at Tidmarsh Farms, which aims to immerse both physical and virtual visitors in a changing natural environment.

Since 2010 a combination of public and private environmental organizations has been transforming 250 acres of cranberry bogs in southern Massachusetts into a protected coastal wetland system. The bogs, collectively called Tidmarsh Farms, are co-owned by one of our colleagues, Glorianna Davenport. Having built her career at the Media Lab on the future of documentary, Davenport is fascinated by the idea of a sensor-rich environment producing its own “documentary.” With her help, we are developing sensor networks that document ecological processes and enable people to experience the data those sensors produce. We have begun populating Tidmarsh with hundreds of wireless sensors that measure temperature, humidity, moisture, light, motion, wind, sound, tree sap flow and, in some cases, levels of various chemicals.

Efficient power management schemes will enable these sensors to live off their batteries for years. Some of the sensors will be equipped with solar cells, which will provide enough of a power boost to enable them to stream audio—the sound of the breeze, of nearby birds chirping, of raindrops falling on the surrounding leaves. Our geosciences colleagues at the University of Massachusetts Amherst are outfitting Tidmarsh with sophisticated ecological sensors, including submersible fiber-optic temperature gauges and instruments that measure dissolved oxygen levels in the water. All these data will flow to a database on our servers, which users can query and explore with a variety of applications.
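To give a concrete sense of what querying those data might look like, here is a small Python sketch using SQLite; the table layout, node names and values are placeholders for illustration, not the Living Observatory's actual schema.

```python
# Hypothetical sketch of an application querying archived sensor readings.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE readings (
    node_id TEXT, kind TEXT, value REAL, ts REAL)""")
conn.executemany("INSERT INTO readings VALUES (?, ?, ?, ?)", [
    ("bog-07", "temperature", 18.4, 1700000000.0),
    ("bog-07", "humidity",    71.0, 1700000060.0),
    ("bog-12", "temperature", 17.9, 1700000120.0),
])

# Average temperature per node over a time window: the kind of question an
# ecologist's browser might ask before rendering the answer on the virtual bog.
rows = conn.execute("""
    SELECT node_id, AVG(value) FROM readings
    WHERE kind = 'temperature' AND ts BETWEEN ? AND ?
    GROUP BY node_id""", (1699999000.0, 1700001000.0)).fetchall()
print(rows)
```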

Some of these applications will help ecologists view environmental data collected at the marsh. Others will be designed for the general public. For example, we are developing a DoppelLab-like browser that can be used to virtually visit Tidmarsh from any computer with an Internet connection. In this case, the backdrop is a digital rendering of the topography of the bog, filled with virtual trees and vegetation. The game engine adds the sounds and data collected by the sensors in the marsh. Sound from the microphone array is blended and cross-faded according to your virtual position; you will be able to soar above the bog and hear everything happening at once, listen closely to a small region, or swim underwater and hear sound collected by hydrophones. Virtual wind driven by real-time data collected from the site will blow through the digital trees.
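The blending can be pictured as a distance-weighted mix. The Python sketch below assumes a simple inverse-distance gain law; it illustrates the idea rather than the project's actual audio engine.

```python
# Sketch of distance-based cross-fading between microphone streams
# (gain law and positions are assumptions for illustration).
import math

def mix_weights(listener, mics, rolloff=1.0):
    """Return a normalized gain per microphone, louder for closer mics."""
    gains = []
    for mic in mics:
        d = math.dist(listener, mic) + 1e-6      # avoid division by zero
        gains.append(1.0 / (d ** rolloff))
    total = sum(gains)
    return [g / total for g in gains]

if __name__ == "__main__":
    mics = [(0.0, 0.0, 0.0), (40.0, 0.0, 0.0), (0.0, 60.0, 0.0)]  # positions in meters
    # Hovering high above the bog: the three streams blend almost evenly.
    print(mix_weights((20.0, 30.0, 100.0), mics))
    # Standing next to the first microphone: it dominates the mix.
    print(mix_weights((1.0, 0.0, 1.5), mics))
```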

The Living Observatory is more of a demonstration project than a practical prototype, but real-world applications are easy to imagine. Farmers could use a similar system to monitor sensor-laden plots, tracking the flow of moisture, pesticides, fertilizers or animals in and around their cropland. City agencies could use it to monitor the progression of storms and floods across a city while finding people in danger and getting them help. It is not a stretch to imagine using this technology in our everyday life. Many of us already look up restaurants on Yelp before going out. One day we will be able to check out a restaurant's atmosphere (is it crowded and noisy right now?) before heading across town.

Eventually this kind of remote presence could provide the next best thing to teleportation. We sometimes use DoppelLab to connect to the Media Lab while away on travel because hearing the buzz and seeing the activity brings us a little bit closer to home. In the same way, travelers could project themselves into their homes to spend time with their families while on the road.

Augmenting Our Senses
It is a safe bet that wearable devices will dominate the next wave of computing. We view this as an opportunity to create much more natural ways to interact with sensor data. Wearable computers could, in effect, become sensory prostheses.

Researchers have long experimented with wearable sensors and actuators on the body as assistive devices, mapping electrical signals from sensors to a person's existing senses in a process known as sensory substitution. Recent work suggests that neuroplasticity—the ability of our brain to physically adapt to new stimuli—may enable perceptual-level cognition of “extra sensory” stimuli delivered through our existing sensory channels. Yet there is still a huge gap between sensor network data and human sensory experience.

We believe one key to unlocking the potential of sensory prostheses will be gaining a better handle on the wearer's state of attention. Today's highest-tech wearables, such as Google Glass, tend to act as third-party agents on our shoulders, suggesting contextually relevant information to their wearer (recommending a particular movie as a wearer passes a movie theater, for example). But these suggestions come out of the blue. They are often disruptive, even annoying, in a way that our sensory systems would never be. Our sensory systems allow us to tune in and out dynamically, attending to stimuli if they demand it but otherwise focusing on the task at hand. We are conducting experiments to see if wearable computers can tap into the brain's inherent ability to focus on tasks while maintaining a preattentive connection to the environment.

Our first experiment will determine whether a wearable device in the field can pick out which of a set of audio sources a user is listening to. We would like to use this information to let the wearer of such a device tune into the live microphones and hydrophones at Tidmarsh in much the same way he or she would tune into different natural sources of sound. Imagine concentrating on a distant island in a pond and slowly beginning to hear its faraway sounds, as if your ears were sensitive enough to reach across the distance. Imagine walking along a stream and hearing sound from under the water or looking up at the trees and hearing the birdsong at the top of the canopy. This approach to delivering digital information could mark the beginning of a fluid connection between our sensory systems and networked sensor data. There will probably come a time when sensory or neural implants provide that connection; we hope these devices, and the information they provide, will fold into our existing systems of sensory processing rather than further displacing them.

Dream or Nightmare?
For many people, ourselves included, the world we have just described has the potential to be frightening. Redefining presence means changing our relationship with our surroundings and with one another. Even more concerning, ubiquitous computing has tremendous privacy implications. Yet we believe there are many ways to build safeguards into technology.

A decade ago, in one of our group's projects, Mat Laibowitz deployed 40 cameras and sensors in the Media Lab. He built a large, conspicuous switch into each device so that it could be easily and obviously deactivated. In today's world, too many cameras, microphones and other sensors are scattered around us for any one person to deactivate them all—even if they do have an off switch. We will have to come up with other solutions.

One approach is to make sensors respond to context and a person's preferences. Nan-Wei Gong explored an idea of this kind when she was with our research group several years ago. She built a special key fob that emitted a wireless beacon informing nearby sensor devices of its user's personal privacy preferences. The fob had a large button labeled “No”; pressing it guaranteed the user an interval of total privacy during which all sensors in range were blocked from transmitting his or her data.
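In code, the idea reduces to a broadcast message that nearby nodes suppress themselves on hearing. The Python sketch below is a minimal rendering of that behavior; the message fields and the length of the privacy interval are assumptions, not Gong's actual design.

```python
# Sketch of a sensor node honoring a privacy beacon (fields and timing assumed).
import time

PRIVACY_INTERVAL = 300.0  # seconds of guaranteed quiet after a "No" press (assumed)

class SensorNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.muted_until = 0.0

    def on_beacon(self, message, now=None):
        """Honor a privacy request broadcast by a nearby fob."""
        now = time.time() if now is None else now
        if message.get("request") == "no-transmit":
            self.muted_until = max(self.muted_until, now + PRIVACY_INTERVAL)

    def maybe_transmit(self, reading, now=None):
        """Send a reading to the network only if no privacy request is in force."""
        now = time.time() if now is None else now
        if now < self.muted_until:
            return None                      # data never leave the device
        return {"node": self.node_id, "reading": reading}

if __name__ == "__main__":
    node = SensorNode("camera-12")
    node.on_beacon({"request": "no-transmit"}, now=0.0)
    print(node.maybe_transmit("frame-0001", now=10.0))   # None: still muted
    print(node.maybe_transmit("frame-0002", now=400.0))  # transmits again
```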

Any solution will have to guarantee that all the sensor nodes around a person both receive and honor such requests. Designing such a protocol presents technical and legal challenges. Yet research groups around the world are already studying various approaches to this conundrum. For example, the law could give a person ownership or control of data generated in his or her vicinity; a person could then choose to encrypt or restrict those data from entering the network. One goal of both DoppelLab and the Living Observatory is to see how these privacy implications play out in the safe space of an open research laboratory. As pitfalls and sinister implications reveal themselves, we can find solutions. And as the recent revelations from former NSA contractor Edward Snowden have shown us, transparency is critical, and threats to privacy need to be dealt with legislatively, in an open forum. Barring that, we believe that grassroots, open-source hardware and software development is the best defense against systemic invasions of privacy.

Meanwhile we will be able to start seeing what kinds of new experiences await us in a sensor-driven world. We are excited about the prospects. We think it is entirely possible to develop technologies that will fold into our surroundings and our bodies. These tools will get our noses off the smartphone screen and back into our environments. They will make us more, rather than less, present in the world around us.