From the Executive Director

They Saw a Study: Clinicians Do Rely on the Science

Judgmental biases apply to everyone, scientists included.

By Steven Breckler, PhD

One of the all-time classic studies in social psychology was Hastorf and Cantril’s (1954) demonstration of selective perception and cognitive bias. In “They Saw a Game,” students at Dartmouth and Princeton were asked to comment on an especially rough football game played between the two schools just a few weeks earlier. Even in response to identical film footage of the game, it was clear that the Dartmouth and Princeton students formed very different interpretations of what happened during the game.

Hastorf and Cantril’s study spawned a long line of research in social cognition. It is cited as a demonstration of cognitive biases that bear on everyday social perception. But it can also be applied to scientists’ perceptions and interpretations of research results. Indeed, judgmental biases apply to everyone, scientists included.


A recent review by Baker, McFall, and Shoham (2008) illustrates the point. Baker et al. were criticizing the poor utilization of evidence-based interventions in clinical psychology. Baker et al.’s interpretation is that clinical psychologists are ambivalent about science, giving preference to their own clinical experience over the available research evidence.

Baker et al. cite a number of studies to support their view that clinical psychologists favor their intuition over the research literature. One of those studies was Stewart and Chambless (2007), which Baker et al. summarized as showing that “most clinicians give more weight to their personal experiences than to science in making decisions about intervention (p. 80).”

Although Baker et al. did not elaborate on the Stewart and Chambless study, it was considered significant enough that Sharon Begley picked up on it as part of her October 2009 Newsweek commentary, “Ignoring the Evidence: Why Do Psychologists Reject Science?” As Begley described it, “A 2008 survey of 591 psychologists in private practice found that they rely more on their own and colleagues’ experience than on science when deciding how to treat a patient.” [Despite the incorrect year in Begley’s description, it is clear that she was referring to the Stewart and Chambless (2007) study.]

The survey conducted by Stewart and Chambless asked clinicians about the sources of evidence they use in making treatment decisions and about the resources they rely upon to improve their therapy skills and effectiveness. One view of the results is consistent with the interpretation, favored by Baker et al. and by Begley, that clinical psychologists give more weight to their past clinical experiences than to current research. Yet another view of the same results shows that clinical psychologists rely on both their past experiences and current research.

How could both interpretations follow from the same data?  In one set of survey items, practitioners were asked to rate a number of different influences on their treatment decisions.  They rated each potential influence on a 7-point scale (1 = Strongly Agree to 7 = Strongly Disagree). The two key sources of influence were “past clinical experiences” and “current research on treatment outcome.”  The average rating on the 7-point scale was 1.53 for “past clinical experience” and 2.86 for “current research on treatment outcome.” 

Stewart and Chambless conducted the usual statistical tests, which showed that “respondents rated clinical experience as significantly more important in their typical treatment decisions than treatment outcome research . . . (p. 273).” It was this comparison that presumably led Baker et al. and Begley to their conclusions.  But another fair interpretation is that the clinicians rated both their past clinical experience and treatment outcome research as important in their typical treatment decisions. Indeed, both ratings (1.53 and 2.86) were well below the scale midpoint (4.00), and both on the side indicating agreement that they were a source of influence on treatment decisions.
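To make the two readings concrete, here is a minimal, purely illustrative sketch (in Python; it is not from the original article, and the variable names are my own) that applies both comparisons to the two published means, on the scale where 1 = Strongly Agree and 7 = Strongly Disagree:

```python
# Illustrative sketch only: the two mean ratings reported by Stewart and
# Chambless (2007). Lower values indicate stronger agreement on this scale.
RATINGS = {
    "past clinical experience": 1.53,
    "current research on treatment outcome": 2.86,
}
MIDPOINT = 4.00

# Reading 1: rank the influences against each other (experience comes out ahead).
ranked = sorted(RATINGS, key=RATINGS.get)
print("Rated more influential:", ranked[0])

# Reading 2: compare each influence to the scale midpoint (both count as
# endorsed, because both means fall on the "agree" side of 4.00).
for source, mean in RATINGS.items():
    print(f"{source}: endorsed = {mean < MIDPOINT}")
```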

In another set of survey items, the practitioners were asked about the resources they rely upon to improve their therapy skills and effectiveness. They rated a variety of resources on a 7-point scale (1 = Never to 7 = Always).   Once again, the respondents acknowledged “past experiences with patients” as a key resource (average rating of 5.62), and “treatment materials informed by psychotherapy outcome research findings” as less frequently used (average rating of 4.80).

The statistical tests led Stewart and Chambless to conclude that “clinicians rated past experiences with patients as more important than treatment materials informed by research . . . (p. 274).”  But as before, another fair interpretation is that clinicians utilize both their past experiences and research findings as sources for improving their therapy skills and effectiveness.  Indeed, both ratings (5.62 and 4.80) were above the scale midpoint (4.00), and both on the side indicating that these sources are frequently used.

Just as the Princeton and Dartmouth students drew different interpretations from the same football game, we can draw different conclusions from the same survey results. One interpretation is that clinicians favor their own experience over science; another is that clinicians rely on both their own experience and science, with greater preference for their experience.

Which interpretation is favored may depend on the point one ultimately wants to make. A critic of the state of practice in clinical psychology may favor the former.  But those who believe that clinicians rely on a combination of clinical experience and knowledge of the research literature may favor the latter.

Perhaps the most interesting result from the Stewart and Chambless study came from a small experiment they conducted. Interestingly, the results from this experiment were not mentioned by Baker et al. or by Begley. Yet, these results speak most directly to the question of whether clinicians rely on evidence-based interventions.

Stewart and Chambless developed a hypothetical case description of a patient diagnosed with panic disorder. The clinicians were asked to read the case summary, and to indicate the kind of treatment they would recommend. About half of the clinicians were provided a summary of the treatment outcome literature for panic disorder, indicating that cognitive-behavior therapy (CBT) and pharmacotherapy are the best established treatments for this disorder. The other clinicians did not receive the research summary. Thus, the experimental question was whether provision of the research evidence would influence clinicians’ treatment recommendations (presumably in favor of CBT).

The results were quite amazing, and reassuring. Eighty-six percent of the clinicians who received the research summary indicated that they would use CBT.  Clearly, when provided a summary of the relevant scientific literature, the clinicians relied on the empirically-supported treatment.  What about the clinicians who did not receive the research summary? Seventy-eight percent of them reported that they too would use CBT. Even without a summary of the research evidence, the majority of clinicians based their clinical judgment and recommendation on what the science indicates.  And it should be noted that Stewart and Chambless had reduced the sample for this experiment by removing those clinicians who had prior training in CBT for panic disorder.

Once again, we are faced with different interpretations of the same results. Stewart and Chambless focused on the “significant but small (p. 276)” difference between 78% and 86%.  Yet the results also make clear that the vast majority of sampled clinicians would use the appropriate empirically-supported treatment for the hypothetical case, whether or not they are “briefed” on the science.
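As a rough illustration of how a difference can be statistically reliable yet small in absolute terms, here is a minimal sketch of a standard two-proportion z-test. The group sizes are hypothetical (the experimental subsample sizes are not reported here), so the numbers only illustrate the shape of the argument, not the actual analysis in the article:

```python
# Hedged sketch: two-proportion z-test with HYPOTHETICAL group sizes,
# chosen only to show how an 8-point gap can reach significance.
import math

def two_proportion_z(success1, n1, success2, n2):
    """Return the z statistic for the difference between two proportions."""
    p1, p2 = success1 / n1, success2 / n2
    pooled = (success1 + success2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Assumed group sizes of 200 each, for illustration only.
n1 = n2 = 200
z = two_proportion_z(round(0.86 * n1), n1, round(0.78 * n2), n2)
print(f"z = {z:.2f}, absolute difference = {0.86 - 0.78:.2f}")
```

With these assumed group sizes the test crosses the conventional significance threshold, even though the absolute gap is only eight percentage points and both groups overwhelmingly chose the empirically-supported treatment.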

The extent to which expectations drive data interpretation was especially evident in the concluding words of Stewart and Chambless:

Given that clinicians overall reported that they mildly agree that research has an impact on daily practice, it is surprising that 82% chose CBT for the panic patient even after eliminating those who consider themselves experts in CBT for panic disorder.  This is particularly surprising given that only 45% of clinicians in the sample considered themselves cognitive-behavioral therapists.  This suggests that experimenter demand could be a factor, or that panic disorder is a special case in the clinical community where research has been widely publicized in the media and thus had a major impact (p. 278).

Alternatively, perhaps this suggests that clinicians know how and when to balance their own clinical experience with the research evidence, and that the claim by Baker et al. that clinicians eschew science in favor of personal experience is not very well-founded.

In any event, it is curious that Baker et al. would choose to focus on one part of the data from the publication of Stewart and Chambless, but not the other. Just as Hastorf and Cantril found with the Dartmouth and Princeton students, we can each draw different conclusions from the very same set of facts.

References

Baker, T. B., McFall, R. M., & Shoham, V.  (2008).  Current status and future prospects of clinical psychology: Toward a scientifically principled approach to mental and behavioral health care.  Psychological Science in the Public Interest, 9, 67-103.

Hastorf, A. H., & Cantril, H.  (1954).  They saw a game: A case study.  Journal of Abnormal and Social Psychology, 49, 129-134.

Stewart, R. E., & Chambless, D. L.  (2007).  Does psychotherapy research inform treatment decisions in private practice?  Journal of Clinical Psychology, 63, 267-281.