Science Briefs

Laughing Matters

We began studying laughter in part as a way to understand indexical, or personal, cues in vocal signals.

By Jo-Anne Bachorowski, PhD, and Michael J. Owren, PhD

Laughter is seemingly ubiquitous in human social interactions, and yet we know surprisingly little about this unique human sound. We began studying laughter in part as a way to understand indexical, or personal, cues in vocal signals. Our reasoning was that in laughter we could measure the acoustic properties associated with characteristics like the vocalizer's biological sex and individual identity without the confounds of linguistically related components. We quickly learned that laughter is an extraordinarily rich vocal signal that is worth studying for its own sake, especially given the apparently key role it plays in human social interactions and relationships.

The Sounds of Laughter
Laughter emerges early in human development, being reliably elicited through tickling by about 4 months of age (Sroufe & Waters, 1976). Children born both deaf and blind also laugh at roughly the same age (Eibl-Eibesfeldt, 1989), indicating that this signal is deeply rooted in human biology (Deacon, 1989). Although sometimes regarded as a stereotyped signal (Provine & Yong, 1991), meaning that it tends to be constant in form, we have instead found laughter to be remarkably variable. In fact, laughter may be better thought of as a broad class of sounds with relatively distinct subtypes, each of which may function somewhat differently in a social interaction.

In order to characterize the acoustic features of laughter, we analyzed a corpus of 1024 laughs produced by 97 college-aged adults as they watched two humorous film clips (Bachorowski, Smoski, & Owren, 2001).(1) The first salient finding was that laugh sounds can be readily grouped into voiced and unvoiced varieties. Voicing means that there is regular vibration of the vocal folds during production, giving the sound a tonal, vowel-like quality. The rate of that vibration is termed the fundamental frequency (F0), which is an important contributor to the perceived pitch of the sound. Voiced laughs are the versions commonly thought of as typical laughter, and can have a song-like quality if F0 happens to fluctuate melodically over the course of several bursts. Unvoiced laughs can be otherwise similar to voiced versions, but lack regular vocal-fold vibration. They are therefore noisy and atonal in comparison, and include sounds that can be described as grunt-like or snort-like. In grunt-like forms, the noisiness arises from turbulence in the supralaryngeal vocal tract, whereas in snort-like forms the turbulence occurs primarily in the nasal cavities. Many laughs consist of a mix of voiced and unvoiced components. In our study, both males and females produced all the subtypes, although males produced more grunt-like laughs than females, whereas females produced more voiced laughs than males.(2)

More detailed acoustic analyses showed that laugh sounds are quite different from speech sounds. The average F0 of laughter is, for instance, much higher than is found for speech. In speech, modal male F0 values are about 120 Hz, meaning that the vocal folds are opening and closing about 120 times per second. The average male F0 in laughter was found to be more than twice as high, about 270 Hz. Similar outcomes occurred in females, who show modal F0 values of about 220 Hz in speech, but averaged about 400 Hz in laughter. F0 ranges were also very wide. Although individual laughter "calls" or syllables were only about 0.2 seconds long, their F0s changed by an average of about 60 Hz for males and 85 Hz for females over this brief duration. A talker's pitch contour can also change over a comparable range in speech, but there the changes occur over the course of phrases or entire sentences. Both sexes also produced occasional dramatically high F0 values, for instance one male producing a call of 1245 Hz (which is in the soprano range!) and one female producing a call of 2083 Hz (sounding more bird-like than human).
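The two measures at the heart of these analyses, voicing and F0, can be illustrated with a small sketch. The snippet below is not the analysis pipeline used in the studies cited here; it is a minimal, hypothetical illustration on synthetic signals, estimating F0 by autocorrelation and distinguishing a voiced (periodic) call from an unvoiced (noisy) one by how periodic the waveform is.

```python
import numpy as np

def estimate_f0(signal, sr, fmin=75.0, fmax=2200.0):
    """Estimate fundamental frequency (F0) by autocorrelation.

    Returns (f0_hz, periodicity). Periodicity is the normalized
    autocorrelation peak: near 1.0 for voiced (periodic) sounds,
    near 0.0 for unvoiced (aperiodic, noisy) ones.
    """
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
    ac = ac / ac[0]                                    # normalize: lag 0 == 1
    lo, hi = int(sr / fmax), int(sr / fmin)            # plausible pitch-period lags
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag, ac[lag]

sr = 16000
t = np.arange(sr // 5) / sr  # 0.2 s, roughly one laugh "call"

# Voiced call: harmonics of a 270 Hz fundamental (the male laughter average).
voiced = sum(np.sin(2 * np.pi * 270 * k * t) / k for k in range(1, 6))

# Unvoiced (grunt-like) call: aperiodic turbulence, modeled here as noise.
unvoiced = np.random.default_rng(0).standard_normal(len(t))

f0_v, p_v = estimate_f0(voiced, sr)
f0_u, p_u = estimate_f0(unvoiced, sr)
print(f"voiced:   F0 ~ {f0_v:.0f} Hz, periodicity {p_v:.2f}")
print(f"unvoiced: periodicity {p_u:.2f}")
```

A periodicity threshold (say, around 0.5) would then separate tonal, voiced calls from noisy, grunt-like ones; real laugh recordings would of course require framing, windowing, and more robust pitch tracking than this sketch provides.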

Another important difference from speech was that voiced laughter typically occurred as an unarticulated vowel: the neutral "schwa" sound produced when the vocal folds vibrate while the vocal tract, tongue, lips, and jaw all remain quite relaxed. In other words, rather than resembling sounds like "tee-hee-hee" or "ho-ho-ho," we found prototypical laughter to be a more generic or neutral sound best described as "huh-huh-huh." This lack of articulation may mean that laughter is particularly rich in indexical cues, for instance making it especially easy to identify individuals from their laughter alone.

Laughter in Social Context
As an integral part of human interaction, laughter occurs significantly more often in social than in solitary situations (Provine & Fischer, 1989). Even in a controlled laboratory environment, participants in the Bachorowski et al. (2001) study produced copious amounts of laughter within a 4-minute window. Participants in that study were tested either alone or with a social partner, with the social partner being either a same-sex friend, an other-sex friend, a same-sex stranger, or an other-sex stranger. We were therefore able to assess whether the acoustic variability in laughter described above was differentially associated with these five contexts (Bachorowski, Smoski, Tomarken, & Owren, 2004).

Contrary to popular belief, we did not find any evidence that females laugh more than males.(3) We did find, however, that social context was strongly associated with sex differences in both the acoustics and the rate of laughing. Male laughter seemed to be driven more by whether the social partner was a friend or a stranger than by whether that individual was male or female, although there was some influence of the latter as well. Specifically, males tested with a friend, especially a male friend, produced more laughter and more acoustically extreme laughs (e.g., laughs with higher F0s) than males tested with a stranger. Outcomes for females were not quite as clear, but did indicate a greater influence of the sex of the testing partner than of whether that partner was a friend or a stranger. Female participants laughed more and produced more acoustically extreme laughs when tested with a male than with a female partner.

We have also examined the temporal patterning of laugh production (Smoski & Bachorowski, 2003), with an eye toward testing whether the individually distinctive laughter of a familiar social partner can elicit learned emotional responses in a listener (Owren & Bachorowski, 2003). One result providing preliminary support for this hypothesis was that friends tested together as a dyad were significantly more likely to laugh within one second of each other than were participants paired with strangers. Sex differences were also found in this "antiphonal" laughter, with females laughing more quickly than males in response to a partner beginning to laugh. This outcome suggests that signaling in females may be more finely tuned to social circumstances than in males.
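As a concrete, hypothetical illustration of such an antiphonal measure, the sketch below counts how often one partner's laugh onsets are answered by the other within a one-second window. The function name, the window value, and the timestamps are assumptions for the example, not the actual analysis from the studies above.

```python
def antiphonal_rate(partner_onsets, own_onsets, window=1.0):
    """Fraction of the partner's laughs answered within `window` seconds.

    Both arguments are lists of laugh-onset times in seconds. A partner
    laugh counts as "answered" if any of one's own laughs begins within
    `window` seconds at or after that onset.
    """
    if not partner_onsets:
        return 0.0
    answered = sum(
        any(0.0 <= own - p <= window for own in own_onsets)
        for p in partner_onsets
    )
    return answered / len(partner_onsets)

# Hypothetical laugh-onset times (seconds) for a dyad:
a = [2.1, 10.4, 18.0, 25.5]  # partner A's laughs
b = [2.7, 12.0, 18.9, 26.0]  # partner B's laughs
print(antiphonal_rate(a, b))  # 0.75: three of A's four laughs answered within 1 s
```

Comparing this rate between friend dyads and stranger dyads is one simple way the "laugh within one second" finding could be quantified.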

Laughter Elicits Positive Emotion-Related Responses in Listeners
Following up on the remarkable acoustic variability of laughter noted earlier, we have gone on to test whether listeners in fact respond differently to different laugh sounds. The most obvious contrast to test was voiced versus unvoiced laughter, simply asking listeners to rate how positive each sound was to them (Bachorowski & Owren, 2001). Five sets of listeners rated 70 laugh sounds, with each group responding to a different question: how positive or negative their emotional response was upon hearing each sound, how much they would like to meet the laugher, how sexy the laugher sounded, how friendly the laugher sounded, or how well they thought the laugh would work on a laugh track. Regardless of which question was posed, voiced laughs elicited much more positive ratings than did unvoiced laughs, suggesting that voiced laughter in particular elicits positive emotional responses in listeners.

Owren, Trivedi, Schulman, and Bachorowski (2004) then confirmed that this fundamental difference occurred even when listeners were not explicitly attending to how positive or negative the sound was. Rather, the goal was to test whether voiced and unvoiced laughs triggered different automatic evaluations (Fazio & Olson, 2003). The task used was a version of the implicit association test (Greenwald, McGhee, & Schwartz, 1998), with participants pressing a button labeled "voiced" on the response box if a laugh they heard was voiced, and a button labeled "breathy" if it was unvoiced. They were also asked to use the same buttons in the same way for another simple task, this time classifying spoken words as either "pleasant" (e.g., love, vacation) or "unpleasant" (e.g., death, vomit). Finally, the two tasks were combined, still with only one sound heard on each trial, but with laughs and words randomly interspersed. Response labels were also combined for the two buttons, being paired either as "voiced or pleasant" and "breathy or unpleasant," or as "voiced or unpleasant" and "breathy or pleasant." The results showed clear effects of how the labels were paired: When response options paired the two hypothesized positives ("voiced or pleasant") against the two hypothesized negatives ("breathy or unpleasant"), responses were significantly faster than when the labels were paired incongruently. In other words, automatic responses to these laugh sounds closely paralleled findings from the explicit rating task: Voiced laughter elicited much more positive evaluation than did unvoiced laughter.

Why do Humans Laugh?
We believe that laughter likely evolved in early hominids as part of a long process of divergence from a common ancestor with chimpanzees, as these species invaded the new, more terrestrial niches opening up during the Pleistocene. Taking advantage of new ecological opportunities is proposed to have put a premium on coordinated and cooperative behavior, which among nonhuman species is much more common among biological kin than among unrelated individuals. We suggest that both laughter and smiling evolved in hominids or early humans specifically because they facilitated the formation and maintenance of positive, cooperative relationships among nonkin (Owren & Bachorowski, 2001).

Theoretically, we propose that laughter "works" not because it expresses a state of positive emotion in vocalizers, but by inducing positive affective responses in others (Owren & Rendall, 1997; 2001). This affect-inducing effect thereby primes listeners to behave positively toward laughers. We thus suggest that laughing is a nonconscious strategy of social influence, a position we further believe is supported by the finding that laughers use their sounds quite differently depending on who they are with (Bachorowski et al., 2004; Grammer & Eibl-Eibesfeldt, 1990). As outlined in detail by Owren and Bachorowski (2003), one part of the rationale is that by having shared, positive experiences together, two individuals who are becoming friends also form positive conditioned associations to one another's laugh sounds. As a result, each can use his or her laughter to elicit positive feelings in the other. Continued, mutually positive interactions maintain those learned responses, which either individual can then use to induce a positive emotional response when a socially challenging situation arises, such as when needing cooperation or explicit help from the other. If two individuals are strangers, laughter can still be helpful due to generalized effects learned over a lifetime of hearing laughter that is mostly associated with positive states and situations. However, another strategy is to produce laughter with high-impact acoustic features in situations where listeners are already in a positive state. The features in question are those that tend to be attention-getting and energizing to listeners, including high F0s and dramatic F0 excursions. With friends, laughter can thus be used both to elicit learned positive responses and to accentuate those responses using high-impact laughter. Among strangers, the situation is not so simple: A laugher cannot draw on specific learned affective responses in the others, and can use high-impact laughter to reliably good effect only if the audience is already biased toward the positive.

These hypothesized differences between direct and indirect affect-inducing effects, as well as the functional importance of voiced and unvoiced laughter, warrant more detailed empirical testing. It will also be of interest to pit this affect-induction view against representational accounts. In the latter, laughter is treated as a linguistic-like, referential signal that conveys information about the laugher's state to the listener (see Grammer & Eibl-Eibesfeldt, 1990). In this perspective, acoustic variability carries the "meaning" of laughter, such that one laugh means "I'm happy" while another means "I'm anxious," much as the linguistic contrasts used in speech production correspond to concepts that are independent of the vocalizer but are nonetheless understood by the listener. Given the notable variability we have found in laugh acoustics despite participants' self-reports of equivalent emotional states (Bachorowski et al., 2004), we favor the affect-induction perspective.

Ready extensions of this work include examining the use of laughter by individuals diagnosed with particular conditions, such as Social Anxiety Disorder, and studying the ways in which culture shapes the use of laughter. Regardless of the specific question being asked, we believe that studying laughter will give us important information about how humans establish and maintain mutually cooperative relationships.

1. As in all our work, participants were not aware that laughter was specifically of interest until the end of their testing session.

2. For examples, go to the Vanderbilt website.

3. This absence of overall sex differences has since been replicated in several different laugh-production paradigms.

We thank Moria Smoski, a recent graduate student of Bachorowski, for her substantial contributions to the work described here. This research was funded in part by an NSF POWRE Award and a Vanderbilt University Discovery Award to Bachorowski.

Bachorowski, J.-A., & Owren, M. J. (2001). Not all laughs are alike: Voiced but not unvoiced laughter elicits positive affect in listeners. Psychological Science, 12, 252-257.

Bachorowski, J.-A., Smoski, M. J., & Owren, M. J. (2001). The acoustic features of laughter. Journal of the Acoustical Society of America, 110, 1581-1597.

Bachorowski, J.-A., Smoski, M. J., Tomarken, A., & Owren, M. J. (2004). Laugh rate and acoustics are associated with social context. Manuscript under revision.

Deacon, T. W. (1989). The neural circuitry underlying primate calls and human language. Human Evolution, 4, 367-401.

Eibl-Eibesfeldt, I. (1989). Human ethology. New York: Aldine de Gruyter.

Fazio, R. H., & Olson, M. A. (2003). Implicit measures in social cognition: Their meaning and uses. Annual Review of Psychology, 54, 297-327.

Grammer, K., & Eibl-Eibesfeldt, I. (1990). The ritualization of laughter. In W. Koch (Ed.), Naturlichkeit der sprache und der kultur: Acta colloquii (pp. 192-214). Bochum: Brockmeyer.

Greenwald, A. G., McGhee, D. E., & Schwartz, J. L. K. (1998). Measuring individual differences in implicit cognition: The implicit association test. Journal of Personality and Social Psychology, 74, 1464-1480.

Owren, M. J., & Bachorowski, J.-A. (2001). Smiling, laughter, and cooperative relationships: An attempt to account for human expressions of positive emotions based on "selfish gene" evolution. In T. Mayne & G. A. Bonanno (Eds.), Emotion: Current issues and future development (pp. 152-191). New York: Guilford.

Owren, M. J., & Bachorowski, J.-A. (2003). Reconsidering the evolution of nonlinguistic communication: The case of laughter. Journal of Nonverbal Behavior, 27, 183-200.

Owren, M. J., & Rendall, D. (1997). An affect-conditioning model of nonhuman primate signaling. In D. H. Owings, M. D. Beecher & N. S. Thompson (Eds.), Perspectives in ethology, Vol. 12: Communication (pp. 299-346). New York: Plenum.

Owren, M. J., & Rendall, D. (2001). Sound on the rebound: Bringing form and function back to the forefront in understanding nonhuman primate vocal signaling. Evolutionary Anthropology, 10, 58-71.

Owren, M. J., Trivedi, N., Schulman, A. S., & Bachorowski, J.-A. (2004). Explicit and implicit evaluation of voiced versus unvoiced laughter. Manuscript in preparation.

Provine, R. R., & Fischer, K. R. (1989). Laughing, smiling, and talking: Relation to sleeping and social context in humans. Ethology, 83, 295-305.

Provine, R. R., & Yong, Y. L. (1991). Laughter: A stereotyped human vocalization. Ethology, 89, 115-124.

Smoski, M.J., & Bachorowski, J.-A. (2003). Antiphonal laughter between friends and strangers. Cognition & Emotion, 17, 327-340.

Sroufe, L. A., & Waters, E. (1976). The ontogenesis of smiling and laughter: A perspective on the organization of development in infancy. Psychological Review, 83, 173-189.

About the Authors
Jo-Anne Bachorowski received her doctoral degree in Clinical Psychology from the University of Wisconsin-Madison in 1991. She is currently Associate Professor of Psychology and Co-Director of the doctoral program in Clinical Science at Vanderbilt University. Her research is broadly concerned with vocal communication, focusing on the production and perception of speech acoustics, emotional speech, and laughter. Her work falls at the intersection of clinical, social, and cognitive science.

Michael J. Owren received his B.A. in Psychology from Reed College, and his PhD in Experimental Psychology from Indiana University. He is currently an Assistant Professor in the Department of Psychology at Cornell University, where he directs the Psychology of Voice and Sound Research Lab.