In Brief

Too many studies in psychology are underpowered, muddying many fields with confusing and contradictory results, according to an article in APA's Psychological Methods (Vol. 9, No. 2). The article, by psychologist Scott Maxwell, PhD, of the University of Notre Dame, examines the causes of this problem and possible solutions.

Statistical power is the probability that a statistical test will detect an effect that really exists--for example, a true difference between two populations--rather than have it obscured by random variation in the samples. Psychologists know that, all else being equal, the more participants in a study, the greater its power.
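
To make the idea concrete (an illustration not drawn from the article), power can be computed directly for simple designs. Here is a minimal sketch using the statsmodels library, assuming a two-group comparison with a medium effect size (Cohen's d = 0.5) and the conventional .05 significance level:

```python
# A minimal sketch, not from the article: computing the power of a
# two-sample t-test with statsmodels. The effect size and alpha are
# illustrative assumptions (Cohen's d = 0.5, alpha = .05).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power with 30 participants per group: well below the conventional
# .80 target for a medium effect.
power_n30 = analysis.power(effect_size=0.5, nobs1=30, alpha=0.05)
print(f"Power with 30 per group: {power_n30:.2f}")  # ~0.48

# Participants per group needed to reach .80 power.
n_needed = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"Needed per group for 80% power: {n_needed:.0f}")  # ~64
```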

Given this knowledge, they rely on several rules of thumb to estimate the number of people a study needs, Maxwell says. One rule, for example, suggests that a regression analysis should include at least 10 participants for every predictor variable. These rules of thumb seem to work fairly well in one sense, Maxwell says--many researchers who use them do find significant results.
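
As a rough check on that rule of thumb (my sketch, not an analysis from the paper), one can simulate a regression built to its specification--here, five predictors and 50 participants, with a modest true effect on every predictor--and count how often each coefficient comes out significant:

```python
# A rough simulation, not from the paper: a regression built to the
# 10-participants-per-predictor rule. All numbers here are assumptions
# chosen for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
k, n, n_sims = 5, 50, 2000          # 5 predictors, 10 participants each
beta = np.full(k, 0.3)              # a modest true effect on every predictor

hits = 0
for _ in range(n_sims):
    X = rng.standard_normal((n, k))
    y = X @ beta + rng.standard_normal(n)
    res = sm.OLS(y, sm.add_constant(X)).fit()
    hits += (res.pvalues[1:] < 0.05).sum()  # significant coefficients, intercept excluded

print(f"Per-predictor power at n = 10k: {hits / (n_sims * k):.2f}")
```

With these assumed numbers, per-predictor power lands around .55: high enough that a study will usually turn up some significant result, but nowhere near high enough to detect every true effect.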

Unfortunately, he argues, those results can be misleading, because most researchers test more than one hypothesis in each study--and a study's power to detect at least one true difference is usually much greater than its power to detect all of them. So two researchers might test the same three hypotheses, and each might come up with one significant result--but the two significant results may be entirely different. The consequence, Maxwell says, is a cumulative literature that lacks coherence.
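
The arithmetic behind that gap is simple. Assuming, for illustration, that a study tests three independent hypotheses, all of them true and each tested with power .50 (figures of my choosing, not from the article):

```python
# Illustrative arithmetic with assumed numbers (the article does not
# report these figures): three independent hypotheses, all true, each
# tested with power .50.
per_test_power = 0.50
k = 3

p_at_least_one = 1 - (1 - per_test_power) ** k  # probability of some significant result
p_all = per_test_power ** k                     # probability every true effect is detected

print(f"P(at least one significant): {p_at_least_one:.3f}")  # 0.875
print(f"P(all three significant):    {p_all:.3f}")           # 0.125
```

Under these assumptions a study has an 87.5 percent chance of producing at least one significant result but only a 12.5 percent chance of detecting all three--which is how two such studies can each report one significant finding and still disagree.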

"What I find may look totally different than what you find, but you're going to get yours published, and I'm going to get mine published, and we're going to be contradicting each other," he says.

To combat this problem, Maxwell suggests, researchers need to think of creative ways to expand their participant pools. This might include collaborating with researchers at other institutions on large-scale studies. Also, meta-analyses that combine and analyze the results of many studies can help reduce the confusion caused by conflicting results from underpowered studies, he says.

Such suggestions are well worth considering, says Mark Appelbaum, PhD, a psychologist who studies statistical methods at the University of California, San Diego.

"This is a problem with a long, long history and no easy solution," comments Appelbaum. "And this paper does a good job integrating multiple themes. It's a paper that should be read--that's the bottom line."

--L. WINERMAN