
There is a paradox in the scientific literature on music and language, say researchers, that begs for an explanation.

On the one hand are case studies of patients with brain damage who have lost the ability to use language or music. They suggest that language and music reside in separate and largely independent regions of the brain.

On the other hand are studies that explore normal brain function using techniques such as functional magnetic resonance imaging (fMRI). Contrary to the neuropsychological findings, these studies indicate that many of the same regions "light up" regardless of whether people are processing language or music.

To resolve the paradox, researchers are taking a close look at the basic processes that underlie auditory perception. Increasingly, they are finding that the idea of separate "music centers" and "language centers" in the brain is too crude to explain the complex relationship between the two.

Instead, both phenomena appear to depend on partially overlapping networks of brain regions, each responsible for a particular kind of auditory processing.

"Both music and language are really complex domains that contain many different cognitive processes within them," says Robert Zatorre, PhD, a psychologist and cognitive neuroscientist at McGill University's Montreal Neurological Institute.

Pitch and timing

All too often, however, people oversimplify the results of research on music and language, says Zatorre.

"Sometimes people say, 'Here's evidence that a certain part of the brain lights up to both language and music, and that means that both language and music depend on the same mechanism,'" says Zatorre. "In some respects, that's trivial--like saying if I anesthetize your larynx, you won't be able to speak or sing."

A better strategy, says Zatorre, is to try to understand what each brain region contributes to the overall process.

In his own work, Zatorre has focused on how the left and right auditory cortices analyze incoming auditory information.

Much evidence suggests that the left cortex processes speech, while the right processes music. Zatorre suspects the reason for the division of labor lies not in the categories "speech" and "music," but rather in a fundamental difference between the kinds of processes needed to understand them.

For speech, timing is critical. The acoustic difference between two consonants--such as d and t--can hinge on changes lasting less than 20 milliseconds, or a fiftieth of a second. So a neural system for processing speech needs to be exquisitely sensitive to rapid changes.

For music, however, pitch is most important. (Timing matters, too, but the relevant scale is hundreds, not tens, of milliseconds.) So a neural system for processing music needs to be able to make fine-grained distinctions between similar pitches--middle C and C sharp, for instance, whose frequencies differ by less than 20 hertz.
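
To put numbers on that pitch example, here is a minimal sketch, assuming equal temperament and the standard A4 = 440 Hz tuning; the MIDI note numbers 60 and 61 are used only as convenient labels for middle C and C sharp:

```python
# Minimal sketch of the arithmetic behind the pitch example above.
# Assumes equal temperament and A4 = 440 Hz (standard concert tuning).

def note_frequency(midi_note: int, a4_hz: float = 440.0) -> float:
    """Frequency of an equal-tempered note, where MIDI note 69 is A4."""
    return a4_hz * 2 ** ((midi_note - 69) / 12)

middle_c = note_frequency(60)   # ~261.63 Hz
c_sharp = note_frequency(61)    # ~277.18 Hz

print(f"Middle C:   {middle_c:.2f} Hz")
print(f"C sharp:    {c_sharp:.2f} Hz")
print(f"Difference: {c_sharp - middle_c:.2f} Hz")  # ~15.56 Hz, i.e. under 20 Hz
```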

In a paper in Trends in Cognitive Sciences (Vol. 6, No. 1), Zatorre suggests that these two modes of processing--one focused on timing, the other on frequency--are incompatible on a neural level. As a result, the left and right auditory cortices divide the work between them: The left cortex specializes in timing, while the right specializes in pitch.
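
One way to build an intuition for that tradeoff is the standard time-frequency resolution limit from signal processing: an analysis window of length T can resolve frequencies only to within roughly 1/T. The sketch below works through that arithmetic as an illustration under that assumption; it is not a model of how the cortex actually implements the division of labor.

```python
# Rough illustration of the time/frequency tradeoff, borrowed from
# windowed spectral analysis (an analogy for illustration only,
# not a claim about the cortex's actual mechanism).
# A window of length T seconds resolves frequencies no finer than ~1/T Hz.

def frequency_resolution_hz(window_s: float) -> float:
    """Approximate frequency resolution of an analysis window of length window_s."""
    return 1.0 / window_s

for window_ms in (20, 200):
    df = frequency_resolution_hz(window_ms / 1000)
    print(f"{window_ms:>3} ms window -> ~{df:.0f} Hz resolution")

# 20 ms window:  ~50 Hz resolution -- fast enough to track consonant cues,
#                but too coarse to separate middle C from C sharp (~15.6 Hz apart).
# 200 ms window: ~5 Hz resolution -- fine enough for that pitch distinction,
#                but it smears over 20 ms speech cues.
```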

This approach reframes the difference between music and speech in terms of basic sensory differences. Any sequence of sounds in which fine-grained pitch information was critical would engage the right auditory cortex more than the left, Zatorre suggests--no matter whether it was labeled "speech," "music" or something else.

The theory has its critics, who suggest that labels do matter and that people will process identical sounds differently depending on whether they categorize them as music or speech. Nonetheless, it illustrates an approach that many researchers are taking.

"When people ask me, 'Where is music in the brain?' I just tell them it's everything above the neck," says Zatorre. "I think it's much more valuable to break [music] down into its components, and then identify what brain regions are involved in running those components."

Building structure

While Zatorre and his colleagues are using music and language to understand auditory perception, others are using them to understand higher-level cognitive processes.

"It's becoming clear that comparing music and language is a useful strategy in cognitive science," says Aniruddh Patel, PhD, a researcher at the Neurosciences Institute in La Jolla, Calif. "They're similar enough--they both consist of discrete elements combined in hierarchical sequences--to serve as foils for each other."

The similarity helps explain why some of the same brain regions are activated whether people are trying to understand music or language, says Patel.

In both cases, listeners are "building structure in time," he says. In language, listeners build structure out of nouns, verbs and other syntactical elements. In music, they build it out of notes related within a harmonic space.

The critical difference between the two may lie not in the process of building structure, says Patel, but in the kinds of representations that are being processed--English grammar versus Western musical conventions, for instance.

The idea offers a possible resolution of the paradox between the neuroimaging and neuropsychological findings. Damage to representation areas, such as the temporal lobe, could produce selective impairments in music or language, while damage to processing areas, such as the frontal cortex, would impair both. Patel discusses these ideas in depth in the July issue of Nature Neuroscience (Vol. 6, No. 7), which is devoted to music and the brain.

"If you think of music and language as labels we give to complex brain processes, some of which are shared, then the idea that someone who has a language processing disorder might also have a music disorder isn't that unthinkable," says Patel.

Implicit learning

Like language, music is a highly complex, structured auditory stimulus; unlike language, it carries no explicit meaning. This critical difference is giving researchers a window on implicit learning--the process by which people come to know something without being able to explain it.

"[Research on music] gives you insight into how we acquire knowledge about nonverbal structures and how these influence our perceptions," notes psychologist Barbara Tillmann, PhD, a researcher for the Centre National de la Recherche Scientifique at the University of Lyon I in France.

"That's why it's always interesting to make the parallel with language, where--in addition to the associations between events and their syntactical relations--you also have all the semantic structures coming into play," she adds.

Researchers are finding that people use many of the same basic learning processes for both music and language. Tillmann and her colleagues, for example, have found that even nonmusicians acquire a sophisticated understanding of musical conventions through daily exposure, much as infants learn language. When a piece of music violates those conventions, nonmusicians are slower to process it--just as if they had heard a sentence with an unexpected ending. The finding is reported in the Journal of Experimental Psychology: Human Perception and Performance (Vol. 29, No. 2).

Other researchers are studying how learning develops during the first few months and years of life. Their goal is to understand when and how infants begin to understand music and language.

"We know from previous work that infants are very sensitive to fine nuances in speech," says Caroline Palmer, PhD, a psychologist at Ohio State University. "We wanted to ask, is this speech-specific, or is this part of a more general mechanism that discriminates, perceives and remembers those acoustic features?"

In the Journal of Memory and Language (Vol. 45, No. 4), Palmer and her colleagues reported that 10-month-old infants could learn to recognize particular performances of musical pieces. The process appears to be similar to the one infants use to remember voices.

In adults, Palmer has found another similarity between speech and music, reported in the Journal of Experimental Psychology: Learning, Memory, and Cognition (Vol. 19, No. 2). The kinds of errors people make while speaking or playing tend to be "smart" errors--notes or words that are wrong, but related conceptually to what they intended to produce.

Crucial differences between language and music remain, however. Sandra Trehub, PhD, a psychologist at the University of Toronto, has found that infants are sensitive to aspects of music that play no role in speech.

For instance, infants can recognize tunes even when they are transposed into other keys or played at a faster or slower tempo, and they learn tunes that follow universal musical conventions more easily than those that don't.

"Obviously, there are some culture-specific conventions--such as the major/minor distinction--and those have to be learned," Trehub says. "For the most part, however, we come equipped with what is needed to be musical."

Why should we be built to understand music? In July's Nature Neuroscience (Vol. 6, No. 7), Trehub offers one explanation--the idea that music, no less than language, is a powerful way of forging social connections.

"We have the intelligence and the necessary computational abilities, but non-human primates may have what it takes in those respects, too," says Trehub. "What they don't have is our intensely social nature. That may be the real biological basis of language as well as music--what motivates us to become a language user and to become musical."
