Recently, some colleagues and I started a remediation training program for investigators who had engaged in some form of professional wrongdoing (DuBois, Anderson, and Chibnall 2013).1 While most investigators are referred to our program for violating animal care or human subject protection rules, some are referred for research misconduct, which includes fabrication or falsification of data and plagiarism. In this article, I want to explore a thought pattern that can contribute to research misconduct and major career problems. The following two cases are fictionalized composites of cases we have encountered in our training and research.
Case 1: Sally emigrated from China to pursue a research career in the United States. Her understanding of English is excellent, but she still finds it difficult to write papers in any language other than Mandarin. She is doing a federally funded postdoctoral fellowship in a health psychology lab. Her research focuses on behavioral interventions aimed at reducing obesity. As she writes up the results of her study, she finds two articles that are relevant — a systematic review article on the same subject and a study done by someone in her lab two years earlier. She borrows text extensively from both articles without citing them, and then submits her paper for publication. While the paper is still under review, she is visited by her institution's research integrity officer, who notifies her that the journal found plagiarized text in her article. Her case will be reported to a federal oversight body. Sally is shocked that her failure to cite sources is being treated so seriously.
Case 2: William has run a successful lab for years. He is surprised one day to learn that he is being investigated for data fabrication. For many years, he and others in his lab dropped outliers and made up values when they had missing data — without reporting such activities in their manuscripts.2 He felt this just made it easier to get through peer review. It yielded clearer and stronger patterns, but never changed the basic conclusions of his research. An angry postdoctoral fellow, whom he had fired for consistently sloppy work, reported him to the university's research integrity officer. William lost all of his research funding and his lab was closed. Three staff members lost their jobs, and two doctoral students had to find new mentors and start new projects. William admits that what he did was wrong. He feels intense shame and regrets the pain he caused his staff and students.
Although these cases may seem very different, the two researchers engaged in similar faulty thinking. They both thought they could bend research misconduct rules without actually committing research misconduct. Consider Sally's situation. I sometimes hear people attribute plagiarism by international researchers to a lack of knowledge of U.S. research rules. I find this explanation a bit insulting to Sally and difficult to believe. Sally had completed federally mandated training in the responsible conduct of research. She knew she was plagiarizing. However, she did not think it was terribly wrong. She thought it was similar to the violations of authorship rules that she observed regularly. For example, her lab director insisted on being listed as an author on all papers written by people in his lab even when he contributed nothing to the study — he even refused to review and edit the manuscripts. Nothing ever happened to him.
William's case is quite different because it involved data fabrication rather than plagiarism. He was born and trained in the United States, yet his thinking pattern was not fundamentally different from Sally's. He had, after all, fired his postdoc for sloppy work, and he prided himself on being generally committed to truth and rigor in science. However, his case reminds us of a series of studies by Mazar, Amir, and Ariely (2008) that supported their theory that "people behave dishonestly enough to profit but honestly enough to delude themselves of their own integrity" (p. 633). That is, most people will cheat, but not so much that they are forced to view themselves as cheaters. William would never have fabricated an entire dataset; he just filled in some missing data and dropped outliers that did not fit his hypotheses. He was surprised when he was caught, and even more surprised when he found himself labeled a federal criminal.
People regularly make exceptions to social rules, and this is often tolerated. As Sally noted, not everything that is taught in responsible conduct of research courses is strictly followed in the real world. However, when confronted with research misconduct, our society does not tolerate exceptions. This can be confusing to researchers who observe questionable research practices with some regularity (Koocher and Keith-Spiegel 2010). It can also cause problems for researchers who are accustomed to cheating just a little — just enough to benefit, but not so much as to view themselves as dishonest.
We would do well to learn from the pain of Sally, William, and others who have found themselves in a similar position: Rules about research misconduct do not bend before breaking.
James M. DuBois, DSc, PhD, is professor of medicine and director of the Center for Clinical Research Ethics in the Division of General Medical Sciences at the Washington University School of Medicine.
1 Information on the Professionalism and Integrity in Research Program (P.I. Program) can be found online at: http://www.slu.edu/repair/.
2 The Federal Research Misconduct Policy defines data fabrication as "making up data or results and recording or reporting them." Falsification is "manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record." A finding of research misconduct requires that there be a "significant departure from accepted practices of the relevant research community." Dropping outliers according to a rule that is described when data are reported might not deviate from accepted practices in some fields; but dropping them just to support hypotheses without reporting that outliers were dropped deviates from accepted practices. (See: http://ori.dhhs.gov/federal-research-misconduct-policy, last visited July 1, 2013.)
- DuBois, James M., Emily E. Anderson, and John T. Chibnall. 2013. "Assessing the need for a research ethics remediation program." Clinical and Translational Science. doi: 10.1111/cts.12033.
- Koocher, G. P., and P. Keith-Spiegel. 2010. "Peers nip misconduct in the bud." Nature 466 (7305): 438–440. doi: 10.1038/466438a.
- Mazar, Nina, On Amir, and Dan Ariely. 2008. "The dishonesty of honest people: A theory of self-concept maintenance." Journal of Marketing Research 45: 633–644.