Problem: You've run a study and gotten some nice results, finding, for instance, that dieting seems to cause emotional distress. However, since you didn't randomly assign people to a dieting or non-dieting group, you can't be sure that some other factor didn't play into your finding.

Typically, when faced with such research dilemmas, psychologists use the statistical tool known as analysis of covariance (ANCOVA) to estimate how much measured confounding factors influenced their data. But a new study in Psychological Methods (Vol. 13, No. 4) shows that ANCOVAs as they are traditionally applied are untrustworthy, and that newer approaches yield much better results, say journal editors Steve West, PhD, and Scott Maxwell, PhD.
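To make the core idea concrete, here is a minimal sketch (not the study authors' code) of why covariate adjustment matters in a non-randomized design. It uses simulated data with hypothetical variable names: `baseline_distress` is a confounder that influences both who diets and later distress. A naive group comparison overstates the effect of dieting; an ANCOVA-style regression that includes the confounder recovers an estimate much closer to the true value.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical confounder: people with higher baseline distress diet more often.
baseline_distress = rng.normal(0.0, 1.0, n)
p_diet = 1.0 / (1.0 + np.exp(-baseline_distress))
diet = rng.binomial(1, p_diet, n)

# Simulated outcome: a true dieting effect of 0.5 plus the confounder's influence.
true_effect = 0.5
distress = true_effect * diet + 0.8 * baseline_distress + rng.normal(0.0, 1.0, n)

# Naive estimate: raw difference in group means (biased upward by confounding).
naive = distress[diet == 1].mean() - distress[diet == 0].mean()

# ANCOVA-style estimate: regress the outcome on treatment plus the covariate.
X = np.column_stack([np.ones(n), diet, baseline_distress])
beta, *_ = np.linalg.lstsq(X, distress, rcond=None)
adjusted = beta[1]  # coefficient on the dieting indicator

print(f"naive estimate:    {naive:.2f}")
print(f"adjusted estimate: {adjusted:.2f}  (true effect = {true_effect})")
```

Note that this illustrates only the standard adjustment the article says can fail; the Schafer and Kang paper is precisely about methods that behave better than this classic approach when its assumptions break down.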

In the article, study authors Joseph Schafer, PhD, of Penn State University, and Joseph Kang, PhD, of Northwestern University, demonstrate nine methods of estimating treatment effects using hypothetical data, including a variation on the traditional ANCOVA. All nine proved superior to the classic ANCOVA.

Many of these methods, says Maxwell, represent new developments in statistics, and psychologists may not be aware of them.

"They offer fairer, less-biased comparisons of treatment groups in non-randomized studies," Maxwell says.

To download the formulas and try them out yourself, visit www.stat.psu.edu/~jls/causal/index.html.

—S. Dingfelder