In the opening of The Matrix, columns of strange keyboard characters stream down an old monochrome computer screen. They represent the peeled-back digital curtain of experience, reminding us that every taste, smell and color that we experience is, in a way, a deception—a story computed bit by literal bit in a brain working in the quiet darkness of the skull. We don’t need special hardware to enter the Matrix. We just need to understand the special hardware we’ve been given: our brain.

The reason we can’t bend experience to our liking, Matrix-style, is that we don’t really understand the neural code. There’s no Alan Turing for the brain who can study an arbitrary pattern of brain activity and say, “Right now an image of a beige cat is being experienced.” Neuroscientists know that the specific contents of a sensory experience depend on the timing and spatial patterning of brain activity. But when put to the test with even the most basic mechanistic questions, our ignorance quickly shows itself. If a couple of brain cells had fired a half-second earlier, would you still see the beige cat? What if three additional cells had fired in quick succession? Until recently, all neuroscientists could do in response was shrug and make some generic claims about codes, patterns and the likely importance of timing. But Dmitry Rinberg of New York University and his research group may have just uncovered a partial answer.

In a fascinating recent paper, the researchers used precisely controlled pinpoints of light to directly insert a phantom smell into a mouse’s olfactory brain centers, bypassing the nose altogether. They were also able to systematically adjust that pattern and test how the animal’s experience changed. The study is one of the most audacious and systematic efforts at “experience hacking” yet.

Implanting a specific, reproducible, easily adjustable and completely synthetic percept is no small feat. To do so, Rinberg and his colleagues used genetically modified mice with a light-sensitive channelrhodopsin protein smuggled into their olfactory neurons. When light shines on one of these modified neurons, the light evokes neural activity—the brief electrical “spikes” that are the basic language of the nervous system—with timing that can be exquisitely controlled. Because the part of the brain that processes sensory information from the nose is conveniently located near the surface of the skull, the researchers were able to skip the nose and write in an artificial odor of their own design. By stimulating the olfactory brain directly, the team had essentially complete control over which cells were active, what their arrangement was and when they were activated. The scientists had created odors made to order with the flip of a simple light switch.

Most natural smells evoke widespread and temporally complex activity in the brain. For the purposes of probing and hacking the neural code, though, the researchers opted for a modest and manageable pattern of six small points, randomly distributed and stimulated in succession—a six-note neuronal melody lasting about a third of a second. The mice will never be able to tell us for sure, but this pattern of “notes” presumably smelled like something to them, because in behavioral tests it could be distinguished from other odors, as well as from other six-note patterns.
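Such a stimulus can be thought of as a short list of light spots, each with a position and an onset time. The sketch below is purely illustrative (the spot coordinates, spacing and timing are hypothetical stand-ins, not the study’s actual stimulation parameters): six randomly placed points activated one after another, with the whole sequence lasting roughly a third of a second.

```python
# Illustrative sketch of a "six-note" optogenetic stimulus (hypothetical
# parameters; the real hardware, coordinates and timing differ).
import random

random.seed(0)  # fix the randomness so the "template" odor is reproducible

N_NOTES = 6
TOTAL_MS = 300  # approximate duration of the full pattern (~1/3 second)

template = [
    {
        # Normalized spot position on the olfactory bulb surface (made up).
        "x": random.uniform(0.0, 1.0),
        "y": random.uniform(0.0, 1.0),
        # Notes fire in succession, evenly spaced across the pattern.
        "onset_ms": i * (TOTAL_MS // N_NOTES),
    }
    for i in range(N_NOTES)
]

for note in template:
    print(f"spot at ({note['x']:.2f}, {note['y']:.2f}) lit at {note['onset_ms']} ms")
```

Changing the template odor then amounts to editing this list: move a spot, delete a note, or swap two onset times.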

In the key part of the experiment, the mice played a game of “spot the difference.” The mice were first trained to lick only in response to the original six-note template, so the experimenters could measure how much licking persisted as the pattern was adjusted—and thus how much the mice were fooled by the change. If a specific change—say, leaving out just the first note of the ensemble—was detected easily and reliably, that was an indicator that the note was consequential to the experience. In contrast, if, for example, changing the identity of the sixth neuronal note wasn’t noticeable, then it had less of an effect on the experience. Consistent with earlier work, much of which was done by Rinberg’s group, the early neuronal notes tended to be more information-rich and important for perception than the later ones. More generally, the precise timing of neural activity was found to be a key variable for odor coding, which contradicted some influential models that had argued that the brain disregards fine-scale timing differences. The brain, it seems, cares about the ordering of its notes into melodic patterns—and doesn’t just hear them as stacked chords.
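The melody-versus-chord distinction can be made concrete with a toy comparison (hypothetical cell labels, not data from the study): two patterns that activate the same set of cells in a different order are indistinguishable to a code that ignores timing, but distinct to one that keeps the sequence.

```python
# Two hypothetical six-note activation sequences using the SAME cells
# in a DIFFERENT order (cell IDs are made up for illustration).
pattern_a = ["c3", "c1", "c5", "c2", "c6", "c4"]
pattern_b = ["c1", "c2", "c3", "c4", "c5", "c6"]

def rate_code(pattern):
    # A pure rate code keeps only WHICH cells fired, not when:
    # the "stacked chord" view.
    return frozenset(pattern)

def timing_code(pattern):
    # A timing code also keeps the ORDER of activation:
    # the "melody" view.
    return tuple(pattern)

print(rate_code(pattern_a) == rate_code(pattern_b))      # True: chord view can't tell them apart
print(timing_code(pattern_a) == timing_code(pattern_b))  # False: melody view can
```

The study’s finding that reordering notes changes what the mouse perceives is evidence for something closer to the second representation.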

Ideas about neural coding were historically developed from the study of communication systems and computers, meaning they tended to be fairly abstract and framed in terms of idealized “gates,” “nodes” and “channels.” While there’s no shortage of high-level theoretical proposals concerning the storage, representation and routing of information in the brain, they are quite difficult to test in the arena of flesh, blood and behavior. Given this situation, support for theoretical paradigms is often based on evidence that is indirect and correlative, even if highly suggestive and tantalizingly analogous to processes observed in digital computers. The beauty of the Rinberg team’s paradigm is that it so readily makes the abstract testable (at least in the context of olfactory coding).

As an example of such a test, take the theoretical proposal of “bar code” representation, in which even the slightest change in a pattern of neural activity—a single cell failing to fire, for example—results in a completely different sensory experience. If this hypothetical, highly finicky coding scheme were actually used by the brain, then a single small tweak of the original six-note template pattern should be just as noticeable as a completely new pattern. In fact, the researchers found nearly the opposite. Just as one flat note doesn’t render a melody completely unrecognizable, one slightly nudged note of the original odor “melody” only changed the mouse’s experience slightly. Importantly, as more “wrong notes” were deliberately added, they had a simple additive effect on experience (at least as measured by the animal’s ability to distinguish between smells). Most impressive of all, the team incorporated this observation about the code’s linearity into a statistical model that accurately predicted the mouse’s behavioral response to any arbitrary scrambling of the six-note pattern.

The paper is an unprecedentedly granular look at what, in the brain, makes a given experience that particular experience. The answer, at least in the context of olfaction, has a humanistic ring to it: an experience is a matter of timing and the sum of many small particulars. It’s still not clear how well these results generalize beyond olfaction, or beyond sensation more broadly. Different brain areas have different computational goals and constraints, so it may be more accurate to speak of the organ’s various codes than of some single all-purpose one. We’re also still mostly in the dark about how to stimulate the brain to cook up a complex perceptual experience that’s chosen in advance. Rinberg and his colleagues’ work strategically asked only how things smelled relative to a starting template. For now, the Matrix is still a long way off. But if we were to achieve full-on Matrix-like simulations in the distant future, this study will have been an important early milestone.