Letters

Self-report rating scales

I READ WITH GREAT INTEREST the article about Linda Bartoshuk's advocacy of labeled magnitude scales in the October Monitor. Her main point appeared to be the potential dangers involved in using any sort of self-report rating scale when experimenters cannot be sure that the internal metric of individuals is relatively constant across people.

There seems to be one very likely reason for the lukewarm response she has received to such concerns. Simply put, single-shot, adjective-labeled scales are not a common measure in experimental psychology, precisely because a single-item assessment is rarely reliable. A broad review of perceptual studies shows quite clearly that, when data are aggregated for interpretation, the most common measures employed are precisely those that thwart this "single measure" confound: signal detection theory, magnitude scaling and other such repeated-measures designs. In such designs, individuals are typically working with a limited range of stimulus intensities, and are simply attempting to produce an ordinal/ratio assessment of the individual stimuli. In such a case it doesn't matter whether the internal metrics of individuals match: The experimenter can simply scale the responses across stimuli for each individual (thereby placing the data on the same scale for everyone).

When applied in this way, or in cases where internal metrics can be fairly stable across individuals (such as in many social applications), self-report ratings are perfectly viable tools. In either case, it seems likely that the response to the problem is small because the problem is, too.

ERIC C. ODGAARD, PHD

Yale School of Medicine

REGARDING THE ARTICLE about Dr. Linda Bartoshuk and taste perception, if I understood the story correctly, Dr. Bartoshuk does not "know about that paper or that literature," and I would certainly not call real individual differences an "artifact." A long time ago, Harry Helson and probably hundreds of other Helsonians studied a fundamental psychological process called Adaptation Level. AL is exactly the phenomenon that caught Dr. Bartoshuk's attention; namely, that psychophysical judgments vary predictably with the past experience of the perceiver. Further, the problem of anchor points was addressed statistically several decades ago by Winer, and still forms part of Appendix E in the third edition of Winer, Brown and Michels' "Statistical Principles in Experimental Design."

GENE SACKETT, PHD

University of Washington

Response from Dr. Bartoshuk

ODGAARD AND SACKETT MAY be surprised to find that I am in substantial agreement with some of their points. My argument was made in the context of taste perception but concerns any comparisons of sensory (or hedonic) experiences across groups. We cannot share each other's experiences. Thus across-group comparisons using adjective-labeled scales rest on the implicit assumption that the adjectives reflect the same absolute perceived intensities to all. When they do not, the comparisons will be invalid. Specifically, genuine differences across groups may appear to be smaller than they really are and in some cases erroneous differences in the wrong direction will appear.

Odgaard suggests that the experimenter can place the data "on the same scale for everyone"; Sackett suggests using Winer's anchor point adjustment. These transformations are appropriate for within-subject comparisons or for across-group comparisons when the members have been randomly assigned. My concern focuses on across-group comparisons when the meaning of the labels on the scale is not the same to all groups.

I was delighted to find that Biernat and Manis have identified similar issues in other fields. Many early psychologists (Helson included) were concerned with conditions that would alter ratings for individual subjects. They would never have sanctioned using scales for across-group comparisons without proof that the adjective labels meant the same to everyone. Yet this practice has emerged in a number of fields in recent years. If readers have other examples of attempts to correct this error, I hope they will share them.

LINDA BARTOSHUK, PHD

Yale University

THE OCTOBER MONITOR FEATURED an article describing Dr. Linda Bartoshuk's concerns about the problem of adjectival rating scale use in taste sensitivity research. Specifically, Dr. Bartoshuk suggests that judgments of the "saltiness" or "strength" of taste sensations are problematic in that their subjective nature renders cross-rater comparisons meaningless. We agree with these concerns, and want to point out that similar issues have been raised in social psychological research.

In our own work on the problem of "shifting standards," we argue that trait terms can mean very different things depending on the social category membership of the person being described. The descriptors "tall" and "aggressive" mean something quite different when applied to a woman versus a man because of gender-based expectations (or stereotypes) perceivers hold about these attributes. These stereotypes lead perceivers to judge others relative to within-group standards; thus, it may be inappropriate to directly compare adjectival ratings of members of different groups because those ratings have not been made on a common metric. A man and a woman can each be described as "very tall," yet these equivalent descriptors may mask the fact that the man is (objectively) taller than the woman. Related comments on the "slipperiness" of trait terms can be seen in the work of Ziva Kunda (on contextual construal) and David Dunning (on egocentric trait definitions).

We have attempted to solve the problem of making across-category judgment comparisons by providing raters with more externally anchored or "common rule" scales on which to make their evaluations (e.g., inches in the case of height). There are strong parallels between the problem and the potential solution in our work and in the rather different arena of taste sensation.

MONICA BIERNAT, PHD

University of Kansas

MELVIN MANIS, PHD

University of Michigan

APA and the UN?

JUST WHO IS IT THAT TOLD APA to seek United Nations (UN) nongovernmental organization (NGO) status? On what basis was an application for such status from APA filed in 1996? On whose authority was this done? Surely not mine!

I didn't join the APA in order to influence the UN; likewise, I didn't join my local tropical fish club in order to influence psychology. (There is an organization of psychologists with an avowed international agenda, the International Council of Psychologists, and I belong to it.) The already fragmented APA has no business pretending that it represents its 160,000-plus members in a forum that, without the able assistance of the APA, successfully manages to create political fragmentation on a global scale.

My bottom line is clear: I don't want APA attempting to influence the UN because I'm almost certain that APA will not represent my views when it talks to the UN--yet it has the audacity to do so in my name, without my permission.

FRANK J. GOLD, EDD

Fairbanks, Alaska

Déjà vu

I EXPERIENCED A FRUSTRATING déjà vu when I read Deborah Smith's October article "What makes a president great?" Since 1981 I have published one book and over a dozen articles on the predictors of presidential greatness. The articles appeared in such top journals as APA's own Journal of Personality and Social Psychology (including one lead article in a 1988 issue of JPSP). Several of my reported findings clearly anticipate those mentioned in the Monitor piece. For instance, I showed that presidents have been becoming increasingly outgoing and friendly, or extraverted. Even more important, I showed that the personal characteristic that best predicts a president's standing with posterity is intellectual brilliance. This was the only personality predictor that survived control for a host of other factors, both individual and situational. Curiously, this intellectual brilliance measure correlates 0.71 with the measure of openness to experience discussed in the article. Better yet, once intellectual brilliance is partialed out, the openness measure completely fails to predict presidential greatness. Hence, the only reason why openness to experience emerged as the most powerful predictor among the several NEO-type dimensions is simply that it correlates highly with general intellectual ability. When one of my colleagues saw the media coverage of this provocative research, he shouted down the hall, "Someone is stealing your ideas!" I would not put it so strongly. But I would say that researchers sometimes reinvent the wheel. Even worse, at times the new wheel does not work as well as the old one!

DEAN KEITH SIMONTON, PHD

University of California, Davis

Response from the study authors

WE (RUBENZER, FASCHINGBAUER, & Ones, 2000; Rubenzer & Faschingbauer, 2000) acknowledge Simonton's and others' (Winter, House) major contributions to this area. Simonton examined many situational (assassination in office, years served during wars, etc.) and personality variables to explain variance in presidential reputation. Only Intellectual Brilliance demonstrated incremental validity over situational variables. We studied personality's relationship with historical greatness and new criteria (ethical behavior in office). Situational variables about the president's term were not useful to us, since we sought predictors of behavior in office. Like Simonton, we found that Intellectual Brilliance is a good predictor. However, eight other personality traits explained variance beyond Intellectual Brilliance.

Our project offers insights into presidential traits not available from previous work. We compare the scores of presidents, as a group and individually, to those of typical Americans. We profile individual presidents in depth. For the first time, all 41 presidents have been assessed by multiple experts.

Simonton reviewed chapters of our book manuscript and generously provided the following for promotional use: "Testing the Presidents is by far the most comprehensive scientific study of presidential personality ever published....It must provide the starting point for all future debate about the intricate connection between a president's personality and his leadership" (D.K. Simonton, personal communication, 11/1999).

We appreciate Simonton's support and respect his pioneering work. But, when the words "stealing ideas" are brandished, we feel we owe the readers, who may be unfamiliar with our study, a description of the differences in approaches and findings.

STEVEN J. RUBENZER, PHD

Houston

TOM FASCHINGBAUER, PHD

Richmond, Texas

DENIZ ONES, PHD

University of Minnesota

The Internet and the gender gap

IN THE OCTOBER MONITOR, I was interested to read "The Internet and computer games reinforce the gender gap" by Lisa Rabasca. The point that girls need to be free to develop an interest in technology design and not just its use is well taken.

I was concerned, however, with the undertone of stories such as these, namely that "girls are not like boys, and that's a bad thing." While girls do need to be able to explore technology, sports and other such "boy things," and while they deserve the opportunities to succeed in traditionally male-dominated fields, what about the boys?

The assumption seems to be that boys are better, so the more girls can be like them, the better off they will be. Wouldn't boys benefit from being more like girls as well? I would like very much to hear about efforts to teach boys how to be more verbal, cooperative and emotionally literate. Our society in general may benefit from such initiatives.

SCOTT G. SHELP

California State University