Psychology has experienced a recent string of revelations of data fabrication and other lapses of scientific ethics. The case of Tilburg University's Diederik Stapel and his phony social psychology data is perhaps the most notorious, but others have also riveted attention in the media and within scholarly circles.
These cases raise serious concerns for the discipline. They also invite loose speculation about causes and interventions. Careful and thoughtful discourse can turn these unfortunate episodes into positive and constructive changes that will improve the quality and integrity of science.
Considerable discussion has already appeared around these issues, including commentary in prominent publications such as Nature, Science and The Chronicle of Higher Education. Professional associations, including both APA and the Association for Psychological Science, have encouraged open discussion.
Much of the focus has centered on interventions that may help prevent such incidents in the future. Most frequently mentioned are replication and data sharing. The idea is that increased transparency at every level of the research process carries added checks and safeguards against the kind of data fraud seen in many of the recent cases.
Placing greater value on replication and nurturing a culture of data sharing is important and will ultimately improve the quality of our science. Preventing scientific misconduct, however, is not the reason to pursue them. We should support these practices because of their scientific merit. It would be an added bonus if they carry with them some sort of curative power.
Some of the focus has centered on understanding the causes of scientific misconduct. What would lead a scientist to fabricate data? Is psychology somehow more vulnerable than other disciplines?
Commenting in Nature, psychologist Jennifer Crocker, PhD, applies a classically psychological analysis to the slippery slope of scientific fraud. Minor transgressions can pave the way to larger, serious transgressions down the road, she says. Any intervention that can stop this slide in its early stages is valuable.
Yet another important insight from psychology may not be getting enough attention as we seek to understand these episodes: the power of the situation. We may be focusing too little on the situational pressures that push or pull people into such lapses.
Commenting in the June 9, 2005, issue of Nature, Martinson, Anderson and de Vries suggested that "little attention has so far been paid to the role of the broader research environment in compromising scientific integrity." It is possible that the very culture of science, with its reward structure and systems of incentives, is really the culprit here.
Some have suggested that these problems are more prevalent in psychology than in other fields. Yet studies of scientific misconduct across disciplines, such as that reported by Swazey, Anderson and Louis in a 1993 American Scientist article, suggest that psychology is no different. Indeed, the occurrence of misconduct may be much higher in other fields, especially when financial incentives are tied to research results. This is a challenge for all of science, not just psychology.
Incidents of scientific misconduct carry enormous costs. Careers are ruined, reputations are harmed, trust is compromised, and resources are wasted. It is too easy to make the dispositional attribution, to blame it on another "bad apple."
In the Dec. 3, 2011, issue of The Chronicle of Higher Education, psychologist Alan Kraut, PhD, makes the case that science can police itself. The activity and energy in response to the recent revelations are indeed a healthy sign of self-correcting. Lasting change will require each of us to take some share of responsibility for attending to facets of the environment that we collectively create.