
Psychologists who explore mediating and moderating factors--the intermediate variables that carry an experimental effect and the factors that influence its size--have traditionally had to limit themselves to between-participant research designs, in which separate groups of participants are assigned to different experimental conditions.

But for within-participant experiments, in which each participant is assigned to two or more study conditions in turn, questions of mediation and moderation have posed a problem. A handful of researchers have improvised methods for examining mediation in within-participant studies, but no framework for doing so has been formally developed and tested. Further, the relation between mediation and moderation--two closely related concepts--has not been carefully examined in within-participant contexts.

That's now changed with a method outlined in the June issue of APA's Psychological Methods (Vol. 6, No. 2), by psychologists Charles M. Judd, PhD, of the University of Colorado, David A. Kenny, PhD, of the University of Connecticut, and Gary H. McClelland, PhD, of the University of Colorado. The technique will allow researchers across many fields of psychology to more thoroughly explore the mechanisms that may underlie their effects.

"Fundamentally, psychology is a science that attempts to understand mechanisms and processes by which a therapy or intervention works," observes Judd. "If all we know is that an independent variable makes a difference, that's good, but it doesn't explain to us the mechanism by which those differences are produced."

Similarly, he says, it's important for psychologists to examine for whom a given treatment has its largest effects. Indeed, funding agencies such as the National Institutes of Health increasingly expect investigators to evaluate the effects of treatment on different populations, rather than assuming that treatment outcomes are the same for everyone.

"Until now," Judd notes, "researchers using within-participant designs haven't been able to do that."

"We've come a long way from the older models [in psychology] where we simply apply a stimulus and observe a response," comments Amiram D. Vinokur, PhD, a senior research scientist in the Institute for Social Research at the University of Michigan. "Now we're more ambitious--we'd like to see the chain of events that leads from an intervention to an outcome. This new method may offer a vision for researchers of how they can design their studies to take advantage of these analytic possibilities."

Method builds on familiar techniques

Within-participant research designs, when they're possible and practical, have one towering advantage over between-participant designs: Because each participant serves in every condition, the many irrelevant, randomly occurring differences between individuals no longer cloud comparisons between conditions, so within-participant studies can detect the effects of experimental conditions with fewer participants than between-participant studies require. That boost in statistical power means that researchers who use such designs can save time and money, advancing science faster.

What is more, in some fields of psychology, within-participant designs have long been the norm. In cognitive studies of memory, for example, different experimental conditions are, by convention and convenience, usually tested within participants. Such designs, although valuable for their efficiency, have so far hindered researchers' ability to test the mechanisms that drive their effects, Judd and his colleagues argue.

"By being able to conduct mediation and moderation analyses in within-participant designs," explains Kenny, "researchers in many fields of psychology will be able to more precisely probe their theories."

The new procedure tests mediation and moderation within a general multiple regression framework. The method is appropriate, the researchers note, not only for within-participant designs but, more generally, for any experimental design in which data from different conditions are dependent on one another--for example, when the two partners in a couple are assigned to different treatment conditions.

In the new procedure, one first computes, for each participant, the difference between the outcomes in two treatment conditions. Using this difference score as the dependent variable, one can then examine which other factors--potential mediators or moderators--affect the magnitude of the difference. Working through hypothetical data in their article, Judd and colleagues demonstrate algebraically that the procedure does, in fact, detect mediation and moderation.
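
The basic logic lends itself to a short illustration. The sketch below, written in Python with the widely used statsmodels package, follows the description above for the moderation case: compute each participant's difference score between the two conditions, then regress that score on a candidate moderator. The simulated data and variable names are hypothetical, and the snippet illustrates the general difference-score idea rather than reproducing the full procedure laid out in Judd, Kenny and McClelland's article.

    # A minimal sketch of the difference-score approach described above, using
    # simulated data, not results or code from the published article.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 60  # number of participants, each measured in both conditions

    # Hypothetical data: an outcome in each condition plus a person-level moderator.
    moderator = rng.normal(size=n)
    y_condition1 = rng.normal(loc=5.0, scale=1.0, size=n)
    y_condition2 = y_condition1 + 1.0 + 0.5 * moderator + rng.normal(scale=0.5, size=n)

    # Step 1: compute each participant's difference score between conditions.
    diff = y_condition2 - y_condition1

    # Step 2: regress the difference score on the centered candidate moderator.
    # The intercept estimates the average within-participant treatment effect;
    # a reliable slope suggests the effect's size depends on the moderator.
    predictors = sm.add_constant(moderator - moderator.mean())
    fit = sm.OLS(diff, predictors).fit()
    print(fit.summary())

In this setup, a dependable coefficient on the moderator plays exactly the role the article describes: it signals that a third variable affects the magnitude of the difference between conditions.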

In the past decade, methodologists have developed powerful statistical techniques--sometimes called hierarchical, multilevel or random coefficient regression modeling--that can be brought to bear on questions of mediation and moderation. But such procedures would require that investigators learn new statistical techniques and software, Judd maintains. In contrast, the procedure that he and his colleagues describe can be conducted using any general-purpose multiple regression program.

"We purposely chose to do it all within a multiple-regression framework," Judd says, "because our strong belief is that most researchers just use ANOVA and multiple regression. We may get criticized for using old tools, but from my point of view, that's an advantage because that's what researchers use. If people want to be able to ask these questions, we don't want to force them to learn a new technique or buy new software."

Stephen G. West, PhD, a psychologist at Arizona State University and incoming editor of Psychological Methods, comments, "One of the very nice things about this procedure is that it makes the extension to within-participant designs absolutely transparent to substantive researchers."

In contrast, he explains, "The methods that currently exist are quite technical for most substantive psychologists and would involve learning an entirely new approach to data analysis that most researchers are not familiar with. This approach uses techniques with which nearly all psychologists are familiar."

That, the study authors hope, means the new method can have an impact across a broad swath of psychology--from laboratory studies of memory, decision-making or social interactions to clinical examinations of drug use, depression or risk behavior.

"I can clearly see that this new method will have repercussions in many areas because it is a general methodological technique that can be used almost anywhere," concludes Michigan's Vinokur. "I would definitely call it a significant leap forward."