Revolutions often spring from the simplest of ideas. When a young inventor named Steve Jobs wanted to provide computing power to “people who have no computer experience and don’t particularly care to gain any,” he ushered us from the cumbersome technology of mainframes and command-line prompts to the breezy advances of the Macintosh and iPhone. His idea helped to forever change our relationship with technology.

What other simple but revolutionary ideas are out there in the labs, waiting for the right moment to make it big? We have found 10, and in the following pages we explain what they are and how they might shake things up: Computers that work like minds. Batteries you can top off at the pump. A crystal ball made from data. Consider this collection our salute to the power of a simple idea. 

The Forever Health Monitor
Your smartphone can monitor your vital signs in real time, alerting you to the first sign of trouble

Most people head to their doctors if they have chest pain or a suspicious lump, but signs like these often appear too late. Catching symptoms earlier requires ongoing monitoring—the kind of thing a cell phone might do. Health-scanning systems that exploit the continuous flow of data from cell phones could help eliminate the perilous lag time between the onset of symptoms and diagnosis. Mobile devices could also help care providers identify and treat problems before they become too serious—and too expensive—to address effectively. In theory, such always-on warning systems could slash the 75 percent of health care spending used for chronic disease management and extend life spans by staving off millions of potential health crises.

The mobile marketplace is glutted with health apps that are little more than gimmicks, but a few standout systems promise to help users manage chronic conditions or identify red-flag symptoms. AliveCor’s iPhone ECG, a plastic phone case that is slated for U.S. Food and Drug Administration approval in early 2012, has two metal electrodes on the back of the case that record heart rhythms whenever users hold the device in both hands or press it against their chest. This real-time electrocardiography (ECG) data can be beamed wirelessly to patients, family members and doctors, alerting them to any heart rhythm irregularities. “It doesn’t just give people an early warning but also gives it without the cost associated with conventional ECG tools,” says the device’s developer, biomedical engineer David Albert. Similarly, French company Withings has developed a blood pressure–monitoring device that works with the iPhone. After users don the sleek white cuff, a reading pops up on the phone’s screen within 30 seconds; if the reading is abnormal, a warning also appears. And WellDoc’s FDA-approved diabetes application, DiabetesManager, allows patients to enter a variety of real-time data into their phones, such as blood glucose levels, carbohydrates consumed and diabetes medicines taken. The software analyzes all these factors and supplies patients with a recommended action to keep sugar levels in a healthy range (take insulin, eat something). A trial published in September showed that DiabetesManager users have significantly better long-term glucose control than nonusers.
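WellDoc’s clinical algorithms are proprietary, but the shape of such real-time coaching is easy to sketch. Below is a minimal, purely illustrative rule set in Python; the thresholds and messages are invented for the example, not taken from DiabetesManager.

```python
# A minimal sketch of rule-based coaching in the style of DiabetesManager.
# Thresholds are hypothetical, for illustration only -- WellDoc's actual
# clinical logic is proprietary and far more sophisticated.

def recommend(glucose_mg_dl: float, carbs_g: float, took_meds: bool) -> str:
    """Return a coaching message from one real-time reading."""
    if glucose_mg_dl < 70:
        return "Low blood sugar: eat 15 g of fast-acting carbohydrate."
    if glucose_mg_dl > 180 and not took_meds:
        return "High reading and no medication logged: take your dose."
    if glucose_mg_dl > 180 and carbs_g > 60:
        return "High reading after a large meal: recheck in 2 hours."
    return "Reading in range: no action needed."

print(recommend(glucose_mg_dl=65, carbs_g=30, took_meds=True))
```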

So far the new systems are largely disjointed from one another, and many remain in development. Yet wireless health experts say they represent the beginning of an era when mobile health-monitoring systems will work seamlessly and in concert, giving consumers and their doctors a comprehensive, data-fueled picture of their overall health. “It’s technically possible to press a button [on your phone] and say, ‘I want to look at my vital signs in real time,’” says Eric Topol, director of the Scripps Translational Science Institute.

The big roadblock is sensor technology. Traditional blood glucose monitors must pierce the skin to work, and few people want to wear a blood pressure cuff or a taped-on electrode everywhere they go. But more convenient alternatives are imminent. Scientists in Japan recently created injectable fluorescent fibers that monitor blood glucose. Topol says a future array of nanoparticle-based sensors that interface with smartphones could achieve more reliable monitoring for vital signs and, most enticingly, earlier detection of disease markers such as antibodies. Sensors that can detect so-called tumor markers, for instance, could send immediate alerts to mobile devices, giving patients the option to start preventive chemotherapy before cancerous cells can get entrenched. Moreover, the simpler mobile health monitoring becomes, the more likely consumers will be to sign up. A 2010 survey found that 40 percent of Americans would pay a monthly subscription fee for a mobile device that would send blood pressure, blood glucose or heart rate data to their doctors.

Paul Sonnier, a vice president at the Wireless-Life Sciences Alliance, points out that resolving health issues early on will be even easier when mobile health monitoring is integrated with genetic analysis. If a patient has a gene that predisposes her to diabetes or cancer early in life, for example, she could potentially wear an unobtrusive sensor that sends word of any unusual developments to her phone. “You’d have an embedded nanosensor to be ahead of the first attack on the islet cells of the pancreas, the first cancerous cell that shows up,” Topol says. Should mobile health-monitoring systems reach their potential, they will serve as ever-present sentinels that protect people before they know they’re in danger. —Elizabeth Svoboda

A Chip That Thinks Like a Brain
Neural computers will excel at all the tasks that make regular machines choke

Dharmendra S. Modha is probably the only microchip architect on the planet whose team includes a psychiatrist—and it’s not for keeping his engineers sane. Rather his collaborators, a consortium of five universities and as many IBM labs, are working on a microchip modeled after neurons.

They call their research “cognitive computing,” and its first products, two microchips each made of 256 artificial neurons, were unveiled in August. Right now all they can do is beat visitors at Pong or navigate a simple maze. The ultimate goal, though, is ambitious: to put the neural computing power of the human brain in a small package of silicon. The program, SyNAPSE, which is funded by the U.S. Defense Advanced Research Projects Agency, is building a microprocessor with 10 billion neurons and 100 trillion synapses, roughly equivalent in scale to one hemisphere of the human brain. The team expects it to be no bigger than two liters in volume and to consume as much electricity as ten 100-watt lightbulbs.

Despite appearances, Modha insists he is not trying to create a brain. Instead his team is trying to create an alternative to the von Neumann architecture that nearly every computer has shared since the 1940s. Ordinary chips must pass instructions and data through a single, narrow channel, which limits their top speed. In Modha’s alternative, each artificial neuron will have its own channel, baking in massively parallel processing capabilities from the beginning. “What we are building is a universal substrate, a platform technology, which can serve as the basis for a wide array of applications,” Modha says.
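IBM has not published the chips’ programming model, but the flavor of neuron-style computation can be shown with a textbook model. The toy leaky integrate-and-fire network below is a generic illustration, not SyNAPSE’s design; Python steps through the neurons serially, whereas the chip gives each neuron its own hardware, so every update in the loop could happen at once.

```python
# A toy leaky integrate-and-fire network -- a generic sketch of the kind
# of local, event-driven computation neuromorphic chips run in hardware.
import random

N = 256                                     # neurons, matching IBM's 2011 chips
weights = [[random.uniform(0, 0.1) for _ in range(N)] for _ in range(N)]
potential = [0.0] * N                       # membrane potential per neuron
THRESHOLD, LEAK = 1.0, 0.9

def step(spiking):
    """One tick: leak, integrate incoming spikes, fire and reset."""
    fired = []
    for i in range(N):
        incoming = sum(weights[j][i] for j in spiking)  # only spiking inputs matter
        potential[i] = potential[i] * LEAK + incoming
        if potential[i] >= THRESHOLD:
            fired.append(i)
            potential[i] = 0.0              # reset after firing
    return fired

spiking = random.sample(range(N), 16)       # inject some initial activity
for tick in range(5):
    spiking = step(spiking)
    print(f"tick {tick}: {len(spiking)} neurons fired")
```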

If successful, this approach would be the culmination of 30 years of work on simulated neural networks, says Don Edwards, a neuroscientist at Georgia State University. Even IBM’s competitors are impressed. “Neuromorphic processing offers the potential for solving problems that are difficult—some would say impossible—to address through conventional system designs,” says Barry Bolding, vice president of Cray, headquartered in Seattle.

Modha emphasizes that cognitive-computing architectures will not replace conventional computers but complement them, preprocessing information from the noisy real world and transforming it into symbols that conventional computers are comfortable with. For example, Modha’s chips would excel at pattern recognition, like picking a face out of a crowd, then sending the person’s identity to a conventional computer.

If it all sounds a little too much like the rise of the machines, perhaps it is small comfort that these chips would be bad at mathematics. “Just like a brain is inefficient to represent on today’s computers, the very fast addition and subtraction that conventional computers are good at is very inefficient on a brainlike network. Neither can replace the other,” Modha says. —Christopher Mims

The Wallet in Your Skin
Forget cell-phone payment systems—just wave your hand to charge it

When students in Pinellas County schools fill up their lunch trays in the cafeteria and walk over to the cash registers, they just wave their hands and move on to have lunch with their friends. Schools in this Florida county have installed square-inch sensors at the registers that identify each student by the pattern of veins in his or her palm. Buying lunch involves no cards or cash. Their hands are the only wallets they need.

The Fujitsu PalmSecure system they are using allows these young people to get through the line quickly—wait times have been cut in half since the program started—an important consideration in a school where lunch is only 30 minutes long. The same technology is used by Carolinas Healthcare System, an organization that operates more than 30 hospitals, to identify 1.8 million patients, whether or not they are conscious. It is also used as additional authentication for transactions at Japan’s Bank of Tokyo-Mitsubishi UFJ.
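Fujitsu’s matching algorithm is proprietary, but all biometric identification shares the same skeleton: extract a feature vector from a scan, then compare it against enrolled templates under a distance threshold. The Python sketch below is that generic skeleton with made-up numbers, not PalmSecure’s method.

```python
# Generic biometric template matching: enroll a feature vector extracted
# from a palm scan, then identify new scans by distance threshold.
# Feature values and the threshold are invented for illustration.
import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class VeinDB:
    def __init__(self, threshold=0.25):
        self.templates = {}            # user id -> enrolled feature vector
        self.threshold = threshold     # tighter = fewer false accepts

    def enroll(self, user_id, features):
        self.templates[user_id] = features

    def identify(self, features):
        """Return the closest enrolled user, or None if nobody is close."""
        best = min(self.templates.items(),
                   key=lambda kv: distance(kv[1], features), default=None)
        if best and distance(best[1], features) <= self.threshold:
            return best[0]
        return None

db = VeinDB()
db.enroll("student42", [0.11, 0.83, 0.40, 0.57])
print(db.identify([0.12, 0.81, 0.42, 0.55]))   # -> student42
print(db.identify([0.90, 0.10, 0.95, 0.05]))   # -> None
```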

Many physical characteristics can allow a machine to identify an individual, but only a few of them are both unique and accessible enough to be this straightforward to use. Fingerprints and faces are not as unique as we have been led to believe and can result in false positives. They are also easy to fake. Although irises are unique, capturing them requires someone to peer into a reading device and stare unblinking for several seconds, which is easy to flub and feels intrusive. The three-dimensional configuration of veins in the hand varies widely from person to person and is easy to read with harmless near-infrared light. So why are we still paying for everything with credit cards?

The only barrier to such a “digital wallet” is that banks and technology firms are slow to adopt it, says security guru Bruce Schneier. “All a credit card is, is a pointer to a database,” Schneier says. “It’s in a convenient rectangular form, but it doesn’t have to be. The barriers to entry are not security-based, because security is a minor consideration.”

Once a large retailer or government agency implements such a system—imagine gaining access to the subway with just a high five—it has the potential to become ubiquitous. The financial industry already handles substantial amounts of fraud and false positives, and switching to biometrics is not likely to change that burden. It will make purchasing as simple as waving your hand. —Christopher Mims

Computers That Don’t Freeze Up
People have to manage their own time. Why can’t our machines do the same? New software will keep them humming

Jim Holt’s smartphone is not all that smart. It has a mapping application he uses to find restaurants, but when he’s finished searching, the app continues to draw so much power and memory that he can’t even do a simple thing like send a text message, complains Holt, an engineer at Freescale Semiconductor.

Holt’s phone highlights a general problem with computing systems today: one part of the system does not know what the other is doing. Each program gobbles what resources it can, and the operating system is too stupid to realize that the one app the user cares about at the moment is getting squeezed out. This issue plagues not only smartphones but personal computers and supercomputers, and it will keep getting worse as more machines rely on multicore processors. Unless the various components of a computer learn to communicate their availabilities and needs to one another, the future of computing may not be able to live up to its glorious past.

Holt and his collaborators in Project Angstrom, a Massachusetts Institute of Technology–led research consortium, have come up with an answer: the “self-aware” computer. In conventional computers, the hardware, software and operating system (the go-between for hardware and software) cannot easily tell what the other components are doing, even though they are all running inside the same machine. An operating system, for example, does not know if a video-player application is struggling, even though someone watching the video would certainly notice the jerky picture.

Last year an M.I.T. team released Application Heartbeats, research software that monitors how all the different applications are faring. It can tell, for instance, that video software is running at a pokey 15 frames per second, not an optimal 30.

The idea is to eventually make operating systems that can detect when applications are running unacceptably slowly and consider potential solutions. If the computer had a full battery, perhaps the operating system would direct more computing power to the app. If not, maybe the operating system would tell the application to use a lower-quality but more efficient set of instructions. The operating system would learn from experience, so it might fix the problem faster the second time around. And a self-aware computer would be able to juggle complex goals such as “run these three programs but give priority to the first one” and “save energy as much as possible, as long as it doesn’t interfere with this movie I’m trying to watch.”

The next step is to design a follow-on operating system that can tailor the resources going to any one program. If video were running slowly, the operating system would allocate more power to it. If it were running at 40 frames a second, however, the computer might shunt power elsewhere because movies do not look better to the human eye at 40 frames per second than they do at 30. “We’re able to save 40 percent of power over standard practice today,” says Henry Hoffmann, a doctoral student in computer science at M.I.T. who is working on the software.
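The published systems have many more moving parts, but the core loop is compact: the application emits a “heartbeat” per unit of work, a monitor turns the beats into a rate, and a controller nudges resources to hold the rate inside a target window. The Python sketch below assumes a toy model in which speed scales linearly with allotted cores; everything in it is illustrative, not Project Angstrom’s code.

```python
import time

class HeartbeatMonitor:
    """Collects heartbeats an application emits (e.g., one per video frame)."""
    def __init__(self):
        self.stamps = []

    def beat(self):
        self.stamps.append(time.monotonic())

    def rate(self, window=1.0):
        now = time.monotonic()
        return sum(1 for t in self.stamps if now - t <= window) / window

def adjust_cores(cores, fps, lo=30, hi=40, max_cores=8):
    """Grow the app's share below the window, shrink it above, else hold."""
    if fps < lo and cores < max_cores:
        return cores + 1        # struggling: feed it resources
    if fps > hi and cores > 1:
        return cores - 1        # faster than the eye needs: save power
    return cores

# Toy simulation: assume frame rate scales linearly with cores.
cores = 1
for _ in range(5):
    fps = 12 * cores
    cores = adjust_cores(cores, fps)
    print(f"{fps} frames/s -> now using {cores} core(s)")
```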

Self-aware systems will not only make computers smarter, they could prove essential for managing ever more complex computers in the future, says Anant Agarwal, the project’s lead scientist. Over the past decade computer engineers have added more and more basic computing units, called cores, to computers. Today’s computers have two to four cores, but future machines will use anywhere from dozens to thousands of cores. That would make the task of splitting up computational tasks among the cores, which programmers now do explicitly, nearly impossible. A self-aware system will take that burden off the programmer, adjusting the program’s core use automatically.

Being able to handle so many cores may bring about a whole new level of computing speed, paving the way for a continuation of the trends toward ever faster machines. “As we have very large numbers of cores, we have to have some level of self-aware systems,” says John Villasenor, a professor of electrical engineering at the University of California, Los Angeles, who is not involved in Project Angstrom. “I think you’ll see some elements of this in the next couple of years.” —Francie Diep

Currency without Borders
The world’s first digital currency cuts out the middleman and keeps users anonymous

Imagine if you were to walk into a deli, order a club sandwich, throw some dollar bills down and have the cashier say to you, “That’s great. All I need now is your name, billing address, telephone number, mother’s maiden name, and bank account number.” Most customers would balk at these demands, and yet this is precisely how everyone pays for goods and services over the Internet.

There is no currency on the Web that is as straightforward and anonymous as the dollar bill. Instead we rely on financial surrogates such as credit-card companies, which pocket a percentage of every sale along with your personal information, to handle our transactions. That could change with the rise of Bitcoin, an all-digital currency that is as liquid and anonymous as cash. It’s “as if you were taking a dollar bill, squishing it into your computer and sending it out over the Internet,” says Gavin Andresen, one of the leaders of the Bitcoin network.

Bitcoins are bits—strings of code that can be transferred from one user to another over a peer-to-peer network. Whereas most strings of bits can be copied ad infinitum (a property that would render any currency worthless), users can spend a Bitcoin only once. Strong cryptography protects Bitcoins against would-be thieves, and the peer-to-peer network eliminates the need for a central gatekeeper such as Visa or PayPal. The system puts power in the hands of the users, not financial middlemen.

Bitcoin borrows concepts from well-known cryptography programs. The software assigns every Bitcoin user two unique codes: a private key that is hidden on the user’s computer and a public address that everyone can see. The key and the address are mathematically linked, but figuring out someone’s key from his or her address is practically impossible. If I own 50 Bitcoins and want to transfer them to a friend, the software combines my key with my friend’s address. Other people on the network use the relation between my public address and private key to verify that I own the Bitcoins I want to spend, then record the transfer in the network’s public ledger by racing to solve a computationally demanding puzzle. The computer that finishes first is awarded a few new Bitcoins, an incentive that recruits a diverse collective of users to maintain the system.
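Bitcoin’s real scheme is ECDSA over the secp256k1 curve, but the key/address/signature relationship described above can be shown with a self-contained toy. The Python below implements a Schnorr-style signature over a deliberately tiny group, cryptographically worthless at this size and purely an illustration of how a hidden key signs what a public address can verify.

```python
import hashlib, secrets

# Toy group: p = 2q + 1 with q prime; G generates the order-q subgroup.
# These parameters are far too small to be secure -- illustration only.
P, Q, G = 2039, 1019, 4

def H(r: int, msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(r.to_bytes(2, "big") + msg).digest(), "big") % Q

def keygen():
    x = secrets.randbelow(Q - 1) + 1                  # private key, kept hidden
    y = pow(G, x, P)                                  # public key
    address = hashlib.sha256(str(y).encode()).hexdigest()[:12]  # hash of pubkey
    return x, y, address

def sign(msg: bytes, x: int):
    k = secrets.randbelow(Q - 1) + 1                  # one-time nonce
    r = pow(G, k, P)
    e = H(r, msg)
    return e, (k + x * e) % Q                         # signature (e, s)

def verify(msg: bytes, y: int, e: int, s: int) -> bool:
    r = (pow(G, s, P) * pow(y, Q - e, P)) % P         # g^s * y^-e recovers g^k
    return H(r, msg) == e

x, y, addr = keygen()
tx = b"transfer 50 bitcoins"
e, s = sign(tx, x)
print(addr, verify(tx, y, e, s))                      # -> <address> True
```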

The first reported Bitcoin purchase was pizza sold for 10,000 Bitcoins in early 2010. Since then, exchange rates between Bitcoin and the U.S. dollar have bounced all over the scale like notes in a jazz solo. Because of the currency’s volatility, only the rare online merchant will accept payment in Bitcoins. At this point, the Bitcoin community is small but especially enthusiastic—just like the early adopters of the Internet. —Morgen Peck

Microbe Miners
Bacteria extract metals and clean up the mess afterward

Mining hasn’t changed much since the Bronze Age: to extract valuable metal from an ore, apply heat and a chemical agent such as charcoal. But this technique requires a lot of energy, which means that it is too expensive for ores with lower metal concentrations.

Miners are increasingly turning to bacteria that can extract metals from such low-grade ores, cheaply and at ambient temperatures. Using the bacteria, a mining firm can extract up to 85 percent of a metal from ores with a metal concentration of less than 1 percent by simply seeding a waste heap with microbes and irrigating it with diluted acid. Inside the heap Acidithiobacillus or Leptospirillum bacteria oxidize iron and sulfur for energy. As they eat, they generate reactive ferric iron and sulfuric acid, which degrade rocky materials and free the valued metal.
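The chemistry at work is standard bioleaching fare. Using chalcopyrite (CuFeS2), a common copper ore, as the example, the commonly cited reactions run as follows:

```latex
% Microbes regenerate the ferric iron and sulfuric acid that do the leaching.
\begin{align*}
  4\,\mathrm{Fe^{2+}} + \mathrm{O_2} + 4\,\mathrm{H^+}
    &\xrightarrow{\text{bacteria}} 4\,\mathrm{Fe^{3+}} + 2\,\mathrm{H_2O} \\
  \mathrm{CuFeS_2} + 4\,\mathrm{Fe^{3+}}
    &\longrightarrow \mathrm{Cu^{2+}} + 5\,\mathrm{Fe^{2+}} + 2\,\mathrm{S^0} \\
  2\,\mathrm{S^0} + 3\,\mathrm{O_2} + 2\,\mathrm{H_2O}
    &\xrightarrow{\text{bacteria}} 2\,\mathrm{H_2SO_4}
\end{align*}
```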

Biological techniques are also being used to clean up acidic runoff from old mines, extracting a few last precious bits of metal in the process. Bacteria such as Desulfovibrio and Desulfotomaculum neutralize acids and create sulfides that bond to copper, nickel and other metals, pulling them out of solution.

Biomining has seen unprecedented growth in recent years as a result of the increasing scarcity of high-grade ores. Nearly 20 percent of the world’s copper comes from biomining, and production has doubled since the mid-1990s, estimates mining consultant Corale Brierley. “What mining companies used to throw away is what we call ore today,” Brierley says.

The next step is unleashing bacterial janitors on mine waste. David Barrie Johnson, who researches biological solutions to acid mine drainage at Bangor University in Wales, estimates that it will take 20 years before bacterial mine cleanup will pay for itself. “As the world moves on to a less carbon-dependent society, we have to look for ways of doing things that are less energy-demanding and more natural,” Johnson says. “That’s the long-term objective, and things are starting to move nicely in that direction.” —Sarah Fecht

Crops That Don’t Need Replanting
Year-round crops can stabilize the soil and increase yields. They may even fight climate change

Before agriculture, most of the planet was covered with plants that lived year after year. These perennials were gradually replaced by food crops that have to be replanted every year. Now scientists are contemplating reversing this shift by creating perennial versions of familiar crops such as corn and wheat. If they are successful, yields on farmland in some of the world’s most desperately poor places could soar. The plants might also soak up some of the excess carbon in the earth’s atmosphere.

Agricultural scientists have dreamed of replacing annuals with equivalent perennials for decades, but the genetic technology needed to make it happen has appeared only in the past 10 or 15 years, says agroecologist Jerry Glover. Perennials have numerous advantages over crops that must be replanted every year: their deep roots prevent erosion, which helps soil hold onto critical minerals such as phosphorus, and they require less fertilizer and water than annuals do. Whereas conventionally grown monocrops are a source of atmospheric carbon, land planted with perennials does not require tilling, turning it into a carbon sink.

Farmers in Malawi are already getting radically higher yields by planting rows of perennial pigeon peas between rows of their usual staple, corn. The peas are a much needed source of protein for subsistence farmers, but the legumes also increase soil water retention and double soil carbon and nitrogen content without reducing the yield of the primary crop on a given plot of land.

Taking perennials to the next level—adopting them on the scale of conventional crops—will require a significant scientific effort, however. Ed Buckler, a plant geneticist at Cornell University who plans to develop a perennial version of corn, thinks it will take five years to identify the genes responsible for the trait and another decade to breed a viable strain. “Even using the highest-technology approaches available, you’re talking almost certainly 20 years from now for perennial maize,” Glover says.

Scientists have been accelerating the development of perennials by using advanced genotyping technology. They can now quickly analyze the genomes of plants with desirable traits to search for associations between genes and those traits. When a first generation of plants produces seeds, researchers sequence young plants directly to find the handful out of thousands that retain those traits (rather than waiting for them to grow to adulthood, which can take years).
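The selection step itself is conceptually simple: genotype seedlings young and keep only those carrying the marker alleles linked to the desired trait. The Python sketch below shows the idea; the marker names and alleles are invented for illustration.

```python
# A sketch of marker-assisted selection: rather than growing every
# seedling to adulthood, genotype them young and keep the handful that
# carry trait-linked marker alleles. Markers and alleles are hypothetical.

PERENNIAL_MARKERS = {"qtl_root_depth": "A", "qtl_regrowth": "G"}

seedlings = {
    "plant-001": {"qtl_root_depth": "A", "qtl_regrowth": "G"},
    "plant-002": {"qtl_root_depth": "A", "qtl_regrowth": "T"},
    "plant-003": {"qtl_root_depth": "C", "qtl_regrowth": "G"},
}

keepers = [
    name for name, genotype in seedlings.items()
    if all(genotype.get(m) == allele for m, allele in PERENNIAL_MARKERS.items())
]
print(keepers)   # -> ['plant-001']
```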

Once perennial alternatives to annual crops are available, rolling them out could have a big impact on carbon emissions. The key is their root systems, which would sequester, in each cubic meter of topsoil, an amount of carbon equivalent to 1 percent of the mass of that dirt. Douglas Kell, chief executive of the U.K.’s Biotechnology and Biological Sciences Research Council, has calculated that replacing 2 percent of the world’s annual crops with perennials each year could remove enough carbon to halt the increase in atmospheric carbon dioxide. Converting all of the planet’s farmland to perennials would sequester the equivalent of 118 parts per million of carbon dioxide—enough, in other words, to pull the concentration of atmospheric greenhouse gases back to preindustrial levels.
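Kell’s figure can be sanity-checked with rough numbers, all of them assumptions chosen for illustration: a topsoil bulk density of about 1.3 metric tons per cubic meter, roots reaching roughly one meter deep across roughly 1.9 billion hectares of farmland, and the standard conversion of about 2.13 gigatons of carbon per part per million of CO2.

```latex
% Back-of-envelope check; every input is an assumed round number.
\begin{align*}
  \text{carbon per m}^3 &\approx 0.01 \times 1300\ \mathrm{kg} = 13\ \mathrm{kg} \\
  \text{total carbon}   &\approx 13\ \mathrm{kg/m^3} \times 1.9\times 10^{13}\ \mathrm{m^3}
                         \approx 250\ \mathrm{Gt} \\
  \Delta \mathrm{CO_2}  &\approx 250\ \mathrm{Gt} \div 2.13\ \mathrm{Gt/ppm}
                         \approx 117\ \mathrm{ppm}
\end{align*}
```

—Christopher Mims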

Liquid Fuel for Electric Cars
A new type of battery could replace fossil fuels with nanotech crude

Better batteries are the key to electric cars that can drive for hundreds of miles between rechargings, but progress on existing technology is annoyingly incremental, and breakthroughs are a distant prospect. A new way of organizing the guts of modern batteries, however, has the potential to double the amount of energy such batteries can store.

The idea came to Massachusetts Institute of Technology professor Yet-Ming Chiang while he was on sabbatical at A123 Systems, the battery company he co-founded in 2001. What if there were a way to combine the best characteristics of so-called flow batteries, which push fluid electrolytes through the cell, with the energy density of today’s best lithium-ion batteries, the kind already in our consumer electronics?

Flow batteries, which store power in tanks of liquid electrolyte, have poor energy density, which is a measure of how much energy they can store. Their one advantage is that scaling them up is simple: you just build a bigger tank of energy-storing material.
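That scaling argument can be written in one line. In a flow battery, stored energy is set by the electrolyte tank while power is set by the electrochemical stack, so the two can be sized independently (the symbols here are generic, not from the article):

```latex
E = u \cdot V_{\text{tank}}, \qquad P = p \cdot A_{\text{stack}}
% u: energy density of the electrolyte (Wh/L),  V: tank volume (L)
% p: power density of the cell stack (W/m^2),   A: stack area (m^2)
```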

Chiang and his colleagues constructed a working prototype of a battery that is as energy dense as a traditional lithium-ion battery but whose storage medium is essentially fluid, like a flow battery. Chiang calls it “Cambridge crude”—a black slurry of nanoscale particles and grains of energy-storing metals.

If you could visualize Cambridge crude under an electron microscope, you would see dust-size particles made of the same materials that make up the negative and positive electrodes in many lithium-ion batteries, such as lithium cobalt oxide (for the positive electrode) and graphite (for the negative one).

In between those relatively large particles, suspended in a liquid, would be the nanoscale particles made of carbon that are the secret sauce of this innovation. Clumping together into a spongelike network, they form “liquid wires” that connect the larger grains of the battery, where ions and electrons are stored. The result is a liquid that flows, even as its nanoscale components constantly maintain pathways for electrons to travel between its grains of energy-storage medium.

“It’s really a unique electrical composite,” Chiang says. “I don’t know of anything else that is like it.”

The fact that the working material of the battery can flow has raised some interesting possibilities, including the idea that cars equipped with these batteries could drive into a service station and fill up on Cambridge crude to replace their charge. Chiang’s collaborator on the project, W. Craig Carter of M.I.T., proposes that users might be able to switch out something resembling a propane tank filled with electrolyte rather than recharging at an outlet.

Transferring charged electrolyte into and out of his batteries is not the first commercial application that Chiang is pursuing, however. Along with Carter and entrepreneur Throop Wilder, he has already founded a new company, called 24M Technologies, to bring the team’s work to market. Carter and Chiang are guarded about what the company will release first, but they emphasize the suitability of these batteries for grid-storage applications. Even a relatively small amount of storage can have a significant impact on the performance of intermittent energy sources such as wind and solar, Chiang says. Utility-scale batteries based on his design would have at least 10 times the energy density of conventional flow batteries, making them more compact and potentially cheaper.

Cambridge crude has a long way to go before it can be commercially viable, however. “A skeptic may say that this new design offers significantly more challenging problems to solve than benefits a potential solution may offer,” says the head of a major research university’s energy-storage program, who spoke on condition of anonymity so as not to offend a colleague. All the extra machinery required to pump the fluid through the battery’s cells adds unwanted mass to the system. “The weight and volume of the pumps, storage cylinders, tubes, and the extra needed weight and volume of the electrolyte and carbon additives could make [the technology heavier than] the state of the art.” These batteries may also not be as stable, across time and many cycles of charging and discharging, as conventional lithium-ion batteries.

A more fundamental issue is that charge times for these new batteries would be slower—two to four times slower, Carter says, than conventional ones. This creates a problem for cars, which require rapid transfers of power. One work-around could be pairing the new battery with a conventional battery or an ultracapacitor, which can discharge its energy in a matter of seconds, to buffer transfers during braking and acceleration.

The new design has promise, however. A system that stores energy in “particulate fluids” should be compatible with almost any battery chemistry, says Yury Gogotsi, a materials engineer at Drexel University, making it a multiplier on future innovations in this area. “It opens up a new way of designing batteries,” Gogotsi says. —Christopher Mims

Nano-Size Germ Killers
Tiny knives could be important weapons against superbugs

Drug-resistant tuberculosis is roaring through Europe, according to the World Health Organization. Treatment options are few—antibiotics do not work on these highly evolved strains—and about 50 percent of people who contract the disease will die from it. The grim situation mirrors the fight against other drug-resistant diseases such as MRSA, a staph infection that claims 19,000 lives in the U.S. every year.

Hope comes in the form of a nanotech knife. Scientists working at IBM Research–Almaden have designed a nanoparticle capable of utterly destroying bacterial cells by piercing their membranes.

The nanoparticles’ shells have a positive charge, which binds them to negatively charged bacterial membranes. “The particle comes in, attaches, and turns itself inside out and drills into the membrane,” says Jim Hedrick, an IBM materials scientist working on the project with collaborators at Singapore’s Institute of Bioengineering and Nanotechnology. Without an intact membrane, the bacterium shrivels away like a punctured balloon. The nanoparticles are harmless to humans—they do not touch red blood cells, for instance—because human cell membranes do not have the same electrical charge that bacterial membranes do. After the nanostructures have done their job, enzymes break them down, and the body flushes them out.

Hedrick hopes to see human trials of the nanoparticles in the next few years. If the approach holds up, doctors could squirt nanoparticle-infused gels and lotions onto hospital patients’ skin, warding off MRSA infections. Or workers could inject the particles into the bloodstream to halt systemic drug-resistant organisms, such as streptococci, which can cause sepsis and death. Even if it succeeds, such a treatment would have to overcome any unease over the idea of nanotech drills in the bloodstream. But the nastiest bacteria on the planet won’t succumb easily. —Elizabeth Svoboda