Cover Story

Is language hard-wired? As everyone knows from intro psych, Noam Chomsky thought so. He theorized that babies base language acquisition on innate linguistic knowledge known as universal grammar: a system of principles and rules common to all languages.

Now that view is being challenged as psychologists re-evaluate it using advances in statistical learning and computational modeling. Their findings may provide new insight into how children acquire and process language, a question cognitive scientists still grapple with. And as the research continues to unfold, it may lead to new treatments for children with language disorders.

"It's been a major tenet of the field of linguistics that language could not be learnable, so it had to be innate," says Jay McClelland, PhD, a professor of psychology at Stanford University and a pioneer in using neural network modeling to better understand language acquisition. "What we're finding today is that this has to be rethought."

Reproducing language acquisition

In his book "The Language Instinct" (William Morrow, 1994), Steven Pinker, PhD, argues that by their second birthday, children are learning words at a rate of about one every two hours. While notably incompetent at many other activities at this age, children develop a firm grasp on language, without much error, relatively quickly. Morten Christiansen, PhD, a psychology professor at Cornell University, and a team of international psychologists are using neural network simulations and computer-based analyses of child-directed speech to unearth just how they do it.

Their findings suggest that children absorb the rules of language from adult conversations, particularly those directed specifically to them, much more than scientists originally thought. Specifically, children heed multiple cues, including how a word sounds, its length and pitch, and where it occurs in a phrase or sentence.

Christiansen tested this multiple-cue integration theory using a series of computer simulations. The results, presented at the 2001 Cognitive Science Society conference, indicate that heeding multiple cues bolsters language learning. They also suggest that computer modeling accurately simulates a toddler's ability to recognize words and comprehend simple sentences.
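Christiansen's actual simulations used neural networks trained on corpora of child-directed speech; the details are in his published work, not here. As a loose, self-contained illustration of why multiple-cue integration helps, the sketch below trains a single logistic unit to detect "word boundaries" from three invented, individually unreliable cues, and shows that combining the cues yields higher accuracy than relying on any one alone. The cue names, the data-generating process, and all numbers are made up for demonstration.

```python
# Illustrative sketch only -- NOT Christiansen's model. Three synthetic
# cues (imagine stress, pitch drop, pause length), each right about 75%
# of the time, jointly predict a word boundary better than one cue can.
import random
import math

random.seed(0)

def make_example():
    # A boundary occurs half the time; each cue is a noisy indicator.
    boundary = random.random() < 0.5
    cues = [1.0 if random.random() < (0.75 if boundary else 0.25) else 0.0
            for _ in range(3)]
    return cues, 1.0 if boundary else 0.0

data = [make_example() for _ in range(2000)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(n_cues, epochs=30, lr=0.5):
    # A single logistic unit trained by stochastic gradient descent,
    # using only the first n_cues cues of each example.
    w, b = [0.0] * n_cues, 0.0
    for _ in range(epochs):
        for cues, y in data:
            p = sigmoid(sum(wi * ci for wi, ci in zip(w, cues)) + b)
            err = p - y
            w = [wi - lr * err * ci for wi, ci in zip(w, cues)]
            b -= lr * err
    return w, b

def accuracy(w, b, n_cues):
    hits = sum(
        1 for cues, y in data
        if (sigmoid(sum(wi * ci for wi, ci in zip(w, cues)) + b) > 0.5)
        == (y == 1.0)
    )
    return hits / len(data)

one_cue = accuracy(*train(1), 1)
three_cues = accuracy(*train(3), 3)
print(f"one cue: {one_cue:.2f}  three cues: {three_cues:.2f}")
```

With three independent 75%-reliable cues, the best possible accuracy rises to roughly 84% (the chance that at least two of the three cues agree with the truth), and the learned unit approaches that, while a single cue caps out near 75%. That gap is the basic statistical logic behind the multiple-cue integration theory.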

Follow-up experiments show that children appear to use these multiple cues when learning new words. While initial research looked only at language acquisition in English, Christiansen and his colleagues recently tested their multiple-cue integration analyses in French, Dutch and Japanese, with similar results.

"We don't know yet whether we can completely discard the notion of innate knowledge in the classical Chomskyan sense, but we do know that it's likely to be much less important to explaining language acquisition," Christiansen says.

Some experts caution, however, that computational modeling cannot yet fully reproduce the complexities of social processes at the foundation of language learning. "A lot of what we do in understanding language is make inferences about what our listener knows, and what kind of knowledge we share in common," says James Morgan, PhD, professor of cognitive and linguistic sciences at Brown University. "It's tremendously difficult to program that kind of information into computers."

Developmental psycholinguists, such as Nancy Budwig, PhD, professor of developmental psychology at Clark University in Worcester, Mass., underscore this point, adding that social participation is an essential ingredient in the language-learning process for the human infant.

Modeling language impairment

If neural networks do prove to be a good model of human language learning, however, Christiansen's research may give new hope to people with language disorders.

His research may change how we think about language impairment, particularly among the 6 to 7 percent of U.S. children affected by Specific Language Impairment (SLI), a communication disorder in which a child has difficulty understanding or using words in sentences. SLI is also often referred to as developmental language disorder, language delay or developmental dysphasia.

"The classical view suggests that SLI is caused by a breakdown in one or more language modules, due to some sort of genetic impairment," he says. "However, an emerging perspective suggests that SLI is not a language disorder but rather a broader deficit in underlying learning mechanisms."

This new characterization could perhaps spur new SLI treatments that target the cognitive skills that underlie language, rather than focusing exclusively on a child's language impairment.

In fact, in collaboration with experts in speech and hearing, Christiansen is planning to use neural network modeling to evaluate the potential of such treatments. The modeling allows them to vet experimental approaches before trying them out on actual children, where the wrong approach might negatively affect a child's ability to process language.

"Is this work going to lead to computers that can actually learn and use language in the same manner that humans do? Probably not in our lifetime," says Morgan. "But this research is really central to illuminating what the nature of human nature is."