You just finished reading this sentence. Did you focus on each word, hearing it in your head? Or did you grasp its meaning while scanning the text? Or did you even notice?

The question of how we read has resided at the heart of a debate among cognitive psychologists and educators for more than a century. Views differ over whether we translate written letters directly into concepts--reading a word and knowing its meaning from its visual form--or whether we take an extra step by using phonology--the sounds that correspond to written letters--to sound out a word before processing it. But new research published in the July issue of APA's Psychological Review (Vol. 111, No. 3) attempts to unify the two alternatives using trained computational models to simulate reading acquisition.

While some psychologists disagree with the notion that the two theories of reading actually cooperate, the model posits that these two ideas--known as direct access and phonological mediation--not only coexist in the brain, but also depend on each other to divide the mental labor of reading.

Study co-author Mark S. Seidenberg, PhD, says he is excited by the prospect, noting that, "Hopefully we are proposing a solution to a 100-year debate."

Teaching a computer to read

In the study, Seidenberg, of the University of Wisconsin-Madison, and his former student Michael W. Harm, PhD, now at Stanford University, constructed a computer model that learned to produce the meanings and pronunciations of words from their spellings.

In the model, neuron-like units represented spellings, meanings and pronunciations. For example, the computer would produce binary codes for the word "dog" that corresponded with descriptive phrases such as "furry" and with sounds of the letters "d," "o" and "g." The researchers could then compare codes the computer produced with the correct codes to see if the computer grasped a word's accurate pronunciation and meaning.
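
To make the encoding concrete, here is a minimal sketch in Python, assuming a tiny hand-picked inventory of meaning features and phonemes. The specific units and codes are illustrative stand-ins, not the representations Harm and Seidenberg actually used.

```python
# A toy illustration of binary codes over neuron-like units.
# The feature and phoneme inventories below are hypothetical stand-ins,
# not the coding scheme used in Harm and Seidenberg's model.

SEMANTIC_UNITS = ["furry", "barks", "animal", "has-wings", "edible"]
PHONEME_UNITS = ["d", "o", "g", "k", "a", "t"]

def binary_code(active_units, inventory):
    """Return a 0/1 vector with a 1 for every unit that should be 'on'."""
    return [1 if unit in active_units else 0 for unit in inventory]

# Target codes for the word "dog": its meaning and its sounds.
# Training compares the codes the network produces against targets like these.
dog_meaning = binary_code({"furry", "barks", "animal"}, SEMANTIC_UNITS)
dog_sounds = binary_code({"d", "o", "g"}, PHONEME_UNITS)

print(dog_meaning)  # [1, 1, 1, 0, 0]
print(dog_sounds)   # [1, 1, 1, 0, 0, 0]
```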

The researchers loaded into the computer a 6,000-word list that presented common words like "the" more frequently. Over time, the computer program learned to produce the correct meanings and pronunciations for almost all the words, including homophones such as "plane" and "plain." Seidenberg and Harm found that when the computer used both methods--employing visual and phonological information--to determine meanings, it performed far better than when the researchers restricted it to only one method.
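
The frequency-weighted exposure can be sketched as simple weighted sampling. The word list and counts below are made up for illustration (the real vocabulary was the 6,000-word list described above), and `train_on` is a hypothetical placeholder for one learning update.

```python
import random

# Made-up frequency counts; the actual model used a 6,000-word list.
corpus_frequencies = {"the": 5000, "dog": 120, "plane": 40, "plain": 15}
words = list(corpus_frequencies)
weights = list(corpus_frequencies.values())

def next_training_word():
    """Sample one word, so common words like 'the' appear far more often."""
    return random.choices(words, weights=weights, k=1)[0]

for step in range(10):
    word = next_training_word()
    # train_on(word)  # hypothetical placeholder: present the spelling,
    #                 # compare the produced meaning and pronunciation codes
    #                 # with the correct ones, and adjust the network
```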

Seidenberg says the result indicates that when people read for meaning, they use both visual and phonological processes. "In other models, either the visual system would win or the phonological would win--they were independent," Seidenberg says. "In our model, the two are intimately related: A set of units in the network represents a word's meaning, and both pathways can activate those units."
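
The claim that both pathways activate the same semantic units can be sketched as follows. The activation vectors and the simple summation rule are toy assumptions standing in for a trained connectionist network, not the model's actual parameters.

```python
import numpy as np

# Toy sketch: two pathways feed the SAME semantic units, and their
# contributions sum. The numbers are made up; in the real model the
# activations come from trained connection weights.

SEMANTIC_UNITS = ["furry", "barks", "flat-terrain", "aircraft"]

def semantic_activation(direct, mediated, use_direct=True, use_phonology=True):
    """Sum the activation from whichever pathways are switched on."""
    total = np.zeros(len(SEMANTIC_UNITS))
    if use_direct:
        total += direct    # orthography -> semantics
    if use_phonology:
        total += mediated  # orthography -> phonology -> semantics
    return total

# For the homophone "plain," sound alone is ambiguous with "plane," so the
# phonological pathway spreads activation across both meanings, while the
# direct pathway favors the meaning of the spelled word.
direct_plain = np.array([0.0, 0.0, 0.6, 0.1])
mediated_plain = np.array([0.0, 0.0, 0.4, 0.4])

print(semantic_activation(direct_plain, mediated_plain))                    # both: "flat terrain" wins clearly
print(semantic_activation(direct_plain, mediated_plain, use_direct=False))  # phonology only: ambiguous
```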

The desire among researchers to connect the two reading mechanisms is not new. But Seidenberg and Harm's model is innovative because it suggests that the two theories work in tandem. Moreover, it allows for a precise measurement of different variables, says Linnea Ehri, PhD, a psychology professor at the City University of New York who researches reading acquisition.

"By using computer simulation to test theories of word recognition, Seidenberg can identify what he's changing and how the model reacts," she says. Using this precise, scientific technique, researchers can now study "whether what he finds in the model resembles what we see in human behavior," posits Ehri.

Brokering an agreement

Indeed, hundreds of studies over decades of research have clustered around the direct access and phonological mediation theories. In direct access reading, beginning readers learn to read by creating visual representations of entire words, allowing them to quickly recall the word when they see it again. The theory's proponents argue that reading phonologically is useless out of context.

Ken Goodman, EdD, professor emeritus at the University of Arizona, says the brain uses orthography, syntax and meaning when reading. Instruction focused on drilling phonics, he argues, defeats the entire purpose of learning to read, which is the construction of meaning.

"It's not a question of identifying the word, but how to make sense of the text," he says. "By sounding out words, you lose focus on the meaning. You have to use all language systems when reading."

The whole language movement became popular in the 1980s and remained a primary pedagogy into the 1990s. But recently, many researchers have revived the phonics-based theory of reading. Its defenders argue that to learn to read, people use phonemes--the basic sounds associated with printed letters--to sound out words. That is, instead of associating the word "cat" with the animal and remembering that word, as you would with direct access, you can break it down into "c," "a" and "t" and pronounce the word using the sounds related to those letters.

"Can you access meaning of words without accessing phonology? My research leads me to believe no," says Keith Rayner, PhD, a psychology professor at the University of Massachusetts Amherst who focuses on eye tracking in his reading research. He notes, for example, a study in which he replaced a word in context, such as "I sit on a sandy beach," with the homophone "beech" or the look-alike word "bench." People were less likely to notice the word swap in the homophone condition, suggesting, he says, that they used phonology to read. But Rayner also readily acknowledges that orthography, semantics and syntax are important in reading.

Indeed, many researchers say the evidence argues for some connection between the two theories. After all, children tend to sound out letters when learning to read, they say, but more experienced readers can absorb text on sight.

"As we become more skilled, it may seem that we are bypassing phonology," says Rebecca Treiman, PhD, a professor of child developmental psychology at Washington University in St. Louis. "But that's oversimplified. Most likely, both children and adults use different approaches simultaneously."

Given this, she and others believe the most important contribution of studies such as Harm and Seidenberg's is the movement they represent toward a complex, integrated view of language acquisition.

According to Guy Van Orden, PhD, a psychology professor at Arizona State University, the effects of direct access or phonology vanish or reappear depending on a particular study's conditions, so no result will ever be definitive.

"All of the tasks that we've used in studies are context-sensitive," says Van Orden, who studies cognitive systems. "Those tasks were supposed to be our microscopes, but our arguments using them have tended to end in stalemate."

Although some researchers will always believe reading acquisition is an either/or question--using direct access or phonology--Van Orden and other experts hope to see continued integration of approaches.