Understanding how brains work is one of the greatest scientific challenges of our time, but despite the impression sometimes given in the popular press, researchers are still a long way from some basic levels of understanding. A project recently funded by the Obama administration's BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative is one of several approaches that promise to deliver novel insights by developing new tools, in this case through a marriage of nanotechnology and optics.
There are close to 100 billion neurons in the human brain. Researchers know a lot about how these individual cells behave, primarily through “electrophysiology,” which involves sticking fine electrodes into cells to record their electrical activity. We also know a fair amount about the gross organization of the brain into partially specialized anatomical regions, thanks to whole-brain imaging technologies like functional magnetic resonance imaging (fMRI), which measure how blood oxygen levels change as regions that work harder demand more oxygen to fuel metabolism. We know little, however, about how the brain is organized into distributed “circuits” that underlie faculties like memory or perception. And we know even less about how, or even if, cells are arranged into “local processors” that might act as components in such networks.
We also lack knowledge regarding the “code” large numbers of cells use to communicate and interact. This is crucial, because mental phenomena likely emerge from the simultaneous activity of many thousands, or millions, of interacting neurons. In other words, neuroscientists have yet to decipher the “language” of the brain. “The first phase is learning what the brain's natural language is. If your resolution [in a hypothetical language detector] is too coarse, so you're averaging over paragraphs, or chapters, you can't hear individual words or discern letters,” says physicist Michael Roukes of the California Institute of Technology, one of the authors of the “Brain Activity Map” (BAM) paper published in 2012 in Neuron that inspired the BRAIN Initiative. “Once we have that, we could talk to the brain in complete sentences.”
This is the gap BRAIN aims to address. Launched in 2014 with an initial pot of more than $100 million, the initiative is meant to encourage the development of new technologies for interacting with massively greater numbers of neurons than has previously been possible. The hope is that once researchers understand how the brain works (with cellular detail but across the whole brain), they will have a better understanding of neurodegenerative diseases like Alzheimer's, and of psychiatric disorders like schizophrenia and depression.
Today’s state-of-the-art technology in the field is optical imaging, mainly using calcium indicators: fluorescent proteins, introduced into cells via genetic tweaks, that emit light in response to the changes in calcium levels caused by neurons firing. These signals are recorded using special microscopes that also deliver light, because the indicators must absorb photons before they can emit them. This can be combined with optogenetics, a technique in which cells are genetically modified so they can be activated by light, allowing researchers both to observe and to control neural activity.
Some incredible advances have already been made using these tools. For example, researchers at the Howard Hughes Medical Institute’s Janelia Farm Research Campus, led by Misha Ahrens, published a study in 2013 in Nature Methods in which they recorded activity from nearly all the neurons in the brains of zebra fish larvae. Zebra fish larvae are used because they are easily genetically tweaked, small and, crucially, transparent. The researchers refined a technique called light-sheet microscopy, which uses lasers to produce planes of light that illuminate the brain one cross-section at a time. Because the fish were genetically engineered to express calcium indicators, the researchers could generate two-dimensional pictures of neural activity, which they then stacked into three-dimensional images, capturing 90 percent of the activity of the zebra fish’s 100,000 brain cells.
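The reconstruction step is easy to picture in code. Below is a minimal sketch (my illustration, with a hypothetical scan geometry and random numbers standing in for real fluorescence images): each sweep of the light sheet yields a stack of two-dimensional cross-sections, which combine into one three-dimensional snapshot, and repeated sweeps produce a movie of whole-brain activity.

```python
import numpy as np

# Minimal sketch of the stacking idea, not the Janelia pipeline itself.
# The scan geometry is assumed; Poisson noise stands in for real images.
n_planes, height, width = 40, 512, 512

def acquire_plane(z, t):
    """Stand-in for one light-sheet exposure: a 2-D image of one cross-section."""
    return np.random.poisson(lam=5.0, size=(height, width)).astype(np.float32)

def acquire_volume(t):
    # Sweep the light sheet through the brain one plane at a time,
    # then stack the 2-D images into a single 3-D snapshot.
    return np.stack([acquire_plane(z, t) for z in range(n_planes)], axis=0)

# Repeating the sweep over time yields a 4-D movie of activity.
movie = np.stack([acquire_volume(t) for t in range(10)], axis=0)
print(movie.shape)  # (time, z, y, x) = (10, 40, 512, 512)
```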
As remarkable as this achievement was, it shares a limitation with all “free-space” optical techniques that direct external light into the brain: light only penetrates so far into nontransparent tissue. Using two-photon microscopy, which uses long-wavelength light, imaging is limited to a depth of about two millimeters. This restricts the regions that can be studied in animals whose outer structure, the cortex, is thicker than that. One of the core efforts of the BRAIN Initiative will be to push these limits. “People can use three-photon imaging to get deeper,” says neuroscientist Rafael Yuste of Columbia University, who pioneered calcium imaging and was a co-author of the BAM paper. The technology is now capable of penetrating three millimeters into tissue, he says. (Longer-wavelength light penetrates further, but each photon carries less energy, so multiple photons are needed to excite the indicators.)
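The trade-off in that parenthetical can be made concrete with the photon-energy formula E = hc/λ. The short calculation below is my illustration (the wavelengths are typical round numbers, not figures from the article): doubling or tripling the wavelength divides each photon's energy by two or three, which is why two- and three-photon microscopy need several photons to arrive at an indicator nearly simultaneously.

```python
# Back-of-envelope photon energies: E = h * c / wavelength.
h = 6.626e-34      # Planck constant, J*s
c = 3.0e8          # speed of light, m/s
eV = 1.602e-19     # joules per electronvolt

for nm in (500, 1000, 1500):   # visible vs. typical 2-photon and 3-photon light
    E = h * c / (nm * 1e-9)
    print(f"{nm} nm photon: {E / eV:.2f} eV")

# 500 nm  -> ~2.48 eV (one photon can excite a visible-light indicator)
# 1000 nm -> ~1.24 eV (two photons together supply ~2.48 eV)
# 1500 nm -> ~0.83 eV (three photons together supply ~2.48 eV)
```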
An alternative approach is being taken by a multidisciplinary collaboration of research groups, led by Roukes. Funded by a recent BRAIN grant, his team plans to combine optical methods with nanotechnology to produce nanoscale implants that are inserted into the brain but which interact with cells optically, at depths light can't otherwise reach. “With optical techniques where you're doing standoff sensing, as you go deeper, you lose resolution; the other paradigm is to implant things in the brain,” Roukes says. “Extremely narrow wires can be implanted slowly and tolerated, as long as you don't displace too much tissue.”
They call the technology “integrated neurophotonics.” The needles, or “shanks,” are studded with “emitter” and “detector” pixels, and optical waveguides (essentially tiny optic fibers) route light to the emitters, which use diffraction to send cell-size light beams into the brain. In effect, an optical imager is placed inside the brain. “It's an amalgamation of many different building blocks that applies photonic chip technology to functional brain imaging,” Roukes says. “It's exciting to think about how to use all these bricks to build a different kind of cathedral than has ever been created before.”
One of the project's early aims is to record from every neuron in a one-cubic-millimeter volume of tissue. “We can't understand the entire brain in one fell swoop, we've got to find some pared-down problems,” Roukes says. “The question is: Can we identify some sort of regional processor in the brain that we could understand deeply in the next 10 years?” There are small structures in the cortex called “cortical columns” where internal connections are dense and outward connections are sparse, making them likely candidates for being local processors. In mice these are one millimeter wide, with a cubic millimeter of tissue containing around 100,000 cells—in other words, an ideal early target for study.
Roukes's group is also pushing conventional electrical probes to their limit. They have built nanoprobes with needles about as wide as cells (around 20 micrometers), studded with nanoelectrodes, which poke into the space between cells. But because the distance over which an electrode can pick out signals from single cells amid the cacophony of activity is limited, each electrode allows researchers to record from only one or two cells on average.
Such probes can currently record from around 1,000 neurons. Scaling that up to 100,000 is “an engineering and financial problem,” Roukes says, but the electrodes would have to be distributed across the brain, because recording every cell in one cubic millimeter of tissue would require around 70,000 of them, a level of intrusion far too likely to disturb cell function and damage tissue. Photonic probes might solve this problem. “The distance over which you can resolve individual neurons is much longer for optical than electrical interrogation,” Roukes says. “We can pick up 20 to 50 neurons, so we need fewer recording sites, which means we can space things out and make it less perturbative; that's why this approach looks very promising.”
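The numbers Roukes quotes follow from simple division; here is the back-of-envelope arithmetic (my sanity check, using the figures cited above):

```python
# Rough arithmetic behind the probe counts (illustrative only; the
# per-site figures are the ones quoted in the article).
cells_per_mm3 = 100_000               # cells in a mouse cortical column

cells_per_electrode = 1.5             # "one or two cells" per nanoelectrode
print(f"electrodes needed: ~{cells_per_mm3 / cells_per_electrode:,.0f}")  # ~67,000

for cells_per_site in (20, 50):       # "20 to 50 neurons" per optical site
    print(f"optical sites at {cells_per_site} cells each: "
          f"~{cells_per_mm3 / cells_per_site:,.0f}")
# ~2,000-5,000 optical sites versus ~70,000 electrodes: far fewer
# penetrations, so the shanks can be spaced out and displace less tissue.
```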
The approach could bring the goal of recording from every cell in a cubic-millimeter volume within reach in two years. And if one probe can interact with 100,000 cells, 10 could interface with a million—the ultimate target of the project. All of this could potentially be done deeper inside the brain than is currently possible using free-space optics, and with less damage (and recordings from many more neurons) than “endoscope”-type methods that push microscopes deep into brains.
Everything is being developed in partnership with a manufacturing foundry, so the technology could be easily mass-produced and made available to the research community. Initial testing will be performed in mice, but one of the project's neuroscientists, Andreas Tolias of Baylor College of Medicine in Houston, also works with nonhuman primates and plans to ultimately conduct tests in monkeys.
Extending it to humans won't be simple, however. “There's all sorts of issues with translating this to humans,” Roukes says. “At no time soon will that be possible.” Firstly, optogenetics involves genetic modification, and people are understandably hesitant to modify their genes. Also, the long-term biological compatibility of the implants in higher mammals is uncertain, especially as brains jostle as we move and breathe. “Most of the challenges will probably be around getting these shanks in without acute or chronic immune response,” says biophysicist Adam Cohen of Harvard University. “And without affecting circulation, popping blood vessels or having problems when the animal moves.” Then there's the matter of a surgical procedure to open the skull.
An alternative that might eventually be applied to humans is “neural dust.” Engineer and neuroscientist Jose Carmena of the University of California, Berkeley, and his colleagues, are thinking about nanoscale sensors incorporating wireless communications technology. “The idea is to build small sensors that record activity from local neighborhoods and transmit information wirelessly from deep in the brain,” Yuste says. “It's a third angle that's further in the future.”
Meanwhile, nanophotonics will benefit from related advances, such as better indicators. “All the details in the timing of individual spikes is what tells us what the brain is doing,” Roukes says. “And calcium reporters are slow, so they smear out some of this activity and lose information.” Voltage indicators are faster and record the neural signal researchers are most interested in, but they produce weaker, noisier signals. There are also indicators that report other types of activity: levels of neurotransmitters and other chemicals, and even the physical forces exerted by moving parts of cells. “The brain is a complex chemical system and [the] techniques for optical interactions over large volumes would be applicable to many different indicators,” says Cohen, who mainly works on developing such tools.
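The “smearing” Roukes describes is, in effect, a convolution with a slow response kernel. Here is a minimal sketch (my illustration; the roughly 500-millisecond calcium and 2-millisecond voltage time constants are assumed round numbers) showing a burst of three spikes merging into a single calcium transient while a fast voltage-like signal resolves them:

```python
import numpy as np

# Sketch of why slow calcium reporters "smear out" spike timing.
dt = 0.001                                  # 1-ms time steps
t = np.arange(0.0, 2.0, dt)
spikes = np.zeros_like(t)
spikes[[200, 230, 260, 1200]] = 1.0         # a three-spike burst, then a lone spike

def indicator_response(tau):
    """Exponential decay kernel approximating an indicator's response."""
    return np.exp(-np.arange(0.0, 5 * tau, dt) / tau)

calcium = np.convolve(spikes, indicator_response(0.5), mode="full")[: t.size]
voltage = np.convolve(spikes, indicator_response(0.002), mode="full")[: t.size]

# The slow calcium kernel piles the burst into one broad transient,
# while the fast voltage kernel keeps the three spikes distinct.
print(f"calcium peak during burst: {calcium[200:400].max():.2f}")  # ~2.8
print(f"voltage peak during burst: {voltage[200:400].max():.2f}")  # ~1.0
```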
The potential applications are numerous and profound. “These tools will let us start to understand how complex behaviors arise from the ensemble of single-cell activity patterns,” Cohen says. “One might also use it to explore which areas are dysregulated in diseases and how those patterns lead to symptoms of the disease.” Brain–machine interfaces and neural prosthetics are other areas that will benefit. “This could address visual prosthetics for people who can't have retinal implants because the optic nerve is damaged,” Roukes says. “We could do direct interrogation and patterned stimulation of the visual cortex.”
Which of the approaches turns out to be most useful isn't the important question. A combination will likely be the ultimate answer. “There's a wide array of technologies on the table, and they're not mutually exclusive,” Yuste says. “This is not winner-take-all.”