Like many of her fellow undergraduates at the University of British Columbia, Lauren Emberson relied on public transportation to get around town. All too often, Emberson encountered a seemingly inescapable nuisance aboard TransLink, Vancouver's crowded network of mass transit buses and trains: other passengers' cell phone conversations.
"They drove me up the wall," Emberson says. "There was no way to tune them out. I would be trying to read or listen to music and I felt like I couldn't continue with those tasks."
Emberson, now a doctoral candidate in psychology at Cornell University, and her co-authors recently published a study that helps explain why hearing only one half of a cell phone conversation is so aggravating, yet so captivating. The researchers argue that such "half-alogues," as they dub them, make for dissonant eavesdropping because they are unpredictable: the less information we glean from a conversation, the harder our brains work to make sense of what we hear and the more difficult it is to stop listening. The findings, published online September 3 in Psychological Science, further suggest that cell phone half-alogues demand more of our attention than dialogues and impair our performance on other cognitive tasks, whether we are sitting at a computer in the lab, trying to read on the subway or driving a car.
"I think it's a lovely paper," says Gerry Altmann, a psychologist who studies language processing at the University of York in England and was not involved in the recent study. "I think what is interesting is to find out exactly why these half-alogues are so disruptive, especially using cell phone conversations—it's a real-world activity that we all take part in."
The researchers first recruited two pairs of female college students, placed each student in a separate soundproof room, and asked the pairs to have real cell phone discussions based on provided conversation starters. After the calls ended, each student summarized her conversation in a monologue. The experimenters recorded everything the students said with wireless microphones and used these recordings to construct three different types of 60-second audio clips: monologues and dialogues as well as half-alogues, in which only one member of the pair was heard.
Here is an excerpt from one of the half-alogues used in the study—an unpredictable sequence of blurts and gaps:
16.4 s: That's funny.
19.1 s: I know.
22.4 s: Uh—
23.8 s: That would have—
32.8 s: (cough/laugh)
43.7 s: Yeah, it—
After they had constructed their audio clips the researchers proceeded to the next phase of the experiment: They invited 24 Cornell undergraduates to complete two different tasks in the lab. In one task participants tracked a moving dot on a computer screen with a circular cursor, which required the same kind of steady concentration needed to stay in the appropriate lane while driving. In the other participants held four letters in memory and tried to hit a button only when these letters flashed on a computer screen, ignoring all other symbols. This task required the kind of attention used in correctly responding to traffic lights. As the participants completed these tasks their computers' speakers played clips from the cell phone conversations recorded earlier in the study.
When they heard a half-alogue, the participants' performance measurably decreased: they had trouble tracking the moving dot and made more mistakes in the letter-recognition task. Hearing monologues and dialogues, however, did not significantly reduce performance on the same tasks.
In a second, almost identical experiment, the researchers modified the recorded conversation clips so that they were incomprehensible, although they retained their fundamental acoustic features. This time, as the 17 additional participants completed their tasks they could discern human speech coming from the speakers, but they couldn't understand a word, somewhat like hearing someone talk underwater. In the second experiment the muffled half-alogues failed to distract the participants or reduce their performance on the attention-based tasks. What this shows, the researchers explain, is that half-alogues demand more of our attention not because of any inherently distracting acoustic properties, but because they contain so much less information than dialogues and are therefore far more unpredictable.
"In a dialogue we use a variety of different information to predict what comes next," Altmann says. "If I say, 'The lion is…' then your knowledge of lions allows you to predict certain things about what I might say next: maybe the lion is roaring, for instance. During a normal dialogue, we take any discrepancy between what we predict and what we get as an indication of how to change our expectations. In the half-alogue the problem is we are missing half of the information—we are prevented from doing the prediction. So we end up working very, very hard to try and make sense of it. That is what interferes with our attention: the less info we get from a conversation, the more resources it demands."
Emberson says that some people have wondered whether overheard cell phone conversations are annoying simply because they are so loud. But "in the study we control for loudness, so that doesn't account for the results," she says. Emberson also wonders whether people simply perceive overheard cell phone conversations as louder than typical speech because they monopolize our concentration.
The researchers note in their study that whereas previous work has shown that talking on the phone while behind the wheel impairs driving performance, their new research further implies that merely overhearing a passenger's cell phone conversation could have a similarly dangerous attention-sapping effect.