The brain is built to multitask, as long as the tasks require different types of perception. Some scientists have proposed that when the brain processes information from any sense, those data are then converted to an abstract code. This “code central” theory helps to explain how we transfer rules learned through one sense to another.

But it also suggests that we should be prone to mixing up information coming in from two senses at once, because both streams would be reduced to the same code. To test that prediction, Christopher Conway of Indiana University and Morten Christiansen of Cornell University evaluated how well people could discern complex patterns in sequences of objects on a computer screen or sounds played through headphones. The sequences followed two complex sets of invented rules, known as “grammars.”

Subjects were able to learn either grammar when it was presented by itself, through either visual or auditory training. The code central theory predicts that when the two grammars are presented in the same learning session through different senses, subjects should be unable to distinguish between them. But that wasn't the case.

Instead subjects identified a grammar as correct only when it was presented through the same type of stimuli with which they had learned it. Surprisingly, people learned the grammars just as well whether they were presented one at a time or two at a time through different senses, which is excellent news for multitaskers. But performance plummeted if both grammars were presented with very similar stimuli, such as two sets of abstract shapes or two sets of invented words.