When deciding whether to admit expert testimony on the reliability of eyewitness identifications or any psychological topic, judges in some jurisdictions must determine whether the research presented by an expert is generally accepted in the field from which it came. Thus, courts look to experts in the field to determine whether the science being presented is sound. What happens when the experts disagree?
A recent case explores these issues.
In The People of the State of New York v. Nicol LeGrand (2001), LeGrand was charged with the 1991 second-degree murder of a taxicab driver. No suspects were found until LeGrand's 1998 arrest in connection with a burglary. Four eyewitnesses to the murder were shown a photo lineup; three identified LeGrand, and one of those three also identified him in a live lineup. At trial, several witnesses again identified LeGrand, while several others could not identify the defendant as the perpetrator. The jury hung, and the case was declared a mistrial.
As was the case at the original trial, eyewitness identifications obtained seven years after the witnessed event would be the central evidence at the retrial because there was no evidence to corroborate the identification testimony. Thus, defense counsel sought to enter testimony from psychologist Roy Malpass, PhD, about the factors influencing the reliability of eyewitness identifications.
The battle of the psychological experts
In a preliminary admissibility hearing, Malpass testified about the research on three factors that influence the reliability of eyewitness identifications. First, he said that although there is a statistically significant correlation between witnesses' confidence and accuracy, the size of this relationship is very small. Second, he testified that postevent information, such as learning that other witnesses had identified the same suspect, could alter witnesses' confidence in the accuracy of their identifications. Finally, Malpass reported on the results of a meta-analysis suggesting that the presence of a weapon during the commission of a crime decreased eyewitness identification accuracy. He also testified about a survey of eyewitness experts, published in the American Psychologist, indicating that more than 80 percent of respondents agreed that these phenomena were reliable enough to warrant testifying about them in legal proceedings.
The prosecutor called Ebbe Ebbesen, PhD, who testified that these phenomena were not reliable, citing articles in which researchers concluded that more research was needed to clarify the boundary conditions of these phenomena. He also testified that eyewitness research should not be presented to a jury because the effects across studies were inconsistent and psychologists had not explored all variables that might influence eyewitness identifications, nor the possible interactions among those variables. Finally, Ebbesen argued that the American Psychologist survey was flawed because the researchers surveyed a restricted pool of experts (those who do research on eyewitness issues) rather than a more inclusive pool (those who do research on memory) and because the response rate (34 percent) was inadequate.
The judge's ruling
The judge concluded that none of the three findings was generally accepted: the evidence on the confidence-accuracy relationship, because prevailing views on the nature of the relationship had changed over the years as new evidence was obtained; the evidence on confidence malleability, because of the inadequacies Ebbesen noted in the expert survey; and the evidence on weapon focus, because some of the studies included in the meta-analysis found no effect of a weapon (i.e., the studies were inconsistent). On those grounds, he ruled that the proffered testimony on eyewitness identifications was not admissible.
Although other issues were raised in the judge's opinion, the decision appears to be based, at least in part, on a misconception of how scientific fields generate knowledge. Science is not static; the accepted findings in a particular field are always evolving based on new research and new methods. If psychologists were to keep silent on an issue until the knowledge base is complete, they would never be able to provide data relevant to legal proceedings or to improving public policy.
Moreover, it is unclear whether an adversarial forum, such as a court proceeding, is the best venue for determining whether scientific research is generally accepted. The adversarial system pits two or more experts, specifically chosen for their opposing views, against one another, and their disagreement can lead a judge to believe that there is controversy within a given field, even when the opposition expressed is a minority viewpoint.
The ruling in this case highlights the need for panels of psychologists, chosen to reflect differing views on a particular topic, to draft position papers on psychological issues likely to come before the courts, allowing time for these papers to be circulated among experts in the field for comment and revision.