Science Briefs

How We Organize Our Experience into Events


By Jeffrey M. Zacks

Jeffrey M. Zacks is Associate Professor of Psychology and Radiology at Washington University in Saint Louis. His research focuses on event perception and spatial cognition, using converging cognitive neuroscience methods. He received his BA from Yale University and his PhD from Stanford University, both in Cognitive Psychology. In addition to his academic research, he has worked in applied research at Bell Communications Research and Interval Research Corporation. He is the recipient of a Young Investigator Award in Experimental Psychology: Applied from the APA and the F. J. McGuigan Young Investigator Prize from the American Psychological Foundation, and is a fellow of the Association for Psychological Science and the Midwestern Psychological Association.


Imagine you are sitting back enjoying your favorite broadcast television program. That relatively low-fidelity audiovisual signal is transmitted to your home at a rate of more than a million bits per second (about five times that rate if you are watching a high-definition broadcast). Yet the rate at which humans can encode information for later retrieval is vastly lower than this—most estimates are a few bits per second (Landauer, 1986). How do human perception and comprehension cope with this gap? One ubiquitous solution is chunking—treating a continuous region of an input space as belonging to one entity, while ignoring continuous gradations within that region. In the early 20th century, Gestalt psychologists identified chunking as one of the central problems of perception and cognition (Köhler, 1929). Most research since then has focused on chunking in space—grouping spatial regions into objects. However, chunking in time is probably at least as important. As James (1890, chap. XV) noted, people have the strong subjective sense that the stream of consciousness is segmented such that “what is happening now” is distinct from what came just before. In terms of phenomenology, our experience of events such as pouring a cup of tea or opening a letter seems just as important as our experience of objects such as tables and bicycles. How does this come about?

Prediction, error, and memory updating

One answer is given by Event Segmentation Theory (EST; Zacks, Speer, Swallow, Braver, & Reynolds, 2007). EST starts from two ideas that are widely (though by no means uniformly) held in cognitive science and neuroscience. The first idea is that the impression we have of a unified sense of “what is happening now” arises from a set of memory representations that are maintained online by active neural processing. Collectively, these memory representations are referred to as working memory, and they are characterized by limited duration and capacity (Baddeley, 2003). The second idea is that comprehension is predictive. From vision (Enns & Lleras, 2008) to language (Elman, 2009) to learning (Maia, 2009), current theories propose that we process the present in part by predicting the near future. This has massive adaptive significance because it allows an organism to anticipate circumstances before they arise and respond proactively.

EST proposes that working memory representations of events exist because they improve perception and prediction. For example, if you are watching someone put on a pair of shoes, it is quite predictable that after the first shoe is tied the hands will move to the second shoe. You can thus use a representation of the shoe-tying event to perceive this movement pattern even if it is occluded by the shoe-tier’s body, and can predict something about it before it occurs. However, once the shoes have been tied this event representation will no longer be helpful. EST proposes that having ill-fitting event representations leads to transient increases in prediction error. When these occur, the system responds by updating working memory to form a new set of event representations. Prediction error returns to a low level and perception proceeds as before. (See Figure 1.) Computational simulations show that this architecture can take advantage of sequential structure in ongoing activity to improve prediction performance (Reynolds, Zacks, & Braver, 2007). A neurophysiological account proposes how the architecture can be implemented by interactions between cortical and subcortical neural systems (Zacks et al., 2007).

Fig. 1 
Figure 1. A schematic depiction of how event segmentation emerges from perceptual prediction and the updating of event models. a: Most of the time, sensory and perceptual processing leads to accurate predictions, guided by event models that maintain a stable representation of the current event. Event models are robust to moment-to-moment fluctuations in the perceptual input. b: When an unexpected change occurs, prediction error increases and this is detected by error monitoring processes. c: The error signal is broadcast throughout the brain. The event models’ states are reset based on the current sensory and perceptual information available; this transient processing is an event boundary. Prediction error then decreases and the event models settle into a new stable state. (Reproduced with permission from Kurby & Zacks, 2008.)
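
The loop depicted in Figure 1 can be made concrete with a brief sketch. The Python code below is a minimal illustration of the idea, not the Reynolds, Zacks, and Braver (2007) simulation: a one-dimensional stream stands in for perceptual input, a single stored value stands in for the event model, and prediction error that stays elevated triggers an update. The function name, threshold, smoothing constant, and toy signal are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def segment_stream(signal, threshold=1.0, smoothing=0.9):
    """Return the time steps at which this sketch resets its event model."""
    event_model = signal[0]            # stands in for the working-memory event model
    smoothed_error = 0.0
    boundaries = []
    for t in range(1, len(signal)):
        prediction = event_model       # naive prediction: the next input will resemble the model
        error = abs(signal[t] - prediction)
        smoothed_error = smoothing * smoothed_error + (1 - smoothing) * error
        if smoothed_error > threshold:     # a sustained rise in prediction error...
            event_model = signal[t]        # ...triggers an update of the event model;
            boundaries.append(t)           # this updating is the event boundary
            smoothed_error = 0.0
    return boundaries

# Toy input: an "activity" whose underlying level shifts twice, plus observation noise.
signal = np.concatenate([np.full(50, 0.0), np.full(50, 3.0), np.full(50, -2.0)])
signal = signal + rng.normal(0.0, 0.2, size=signal.size)
print(segment_stream(signal))  # two boundaries, detected a few steps after the shifts at 50 and 100

Between the shifts, prediction error stays low and the stored event model is untouched, which is the sketch's analogue of a stable event representation carrying perception through an ongoing event.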

According to this theory, event segmentation is not merely a conscious strategy for comprehension or memory encoding. Rather, it is an ongoing feature of natural comprehension. One piece of evidence for this suggestion comes from studies that have used functional MRI (fMRI) to measure the neural correlates of event segmentation during ongoing comprehension. In the first of these studies (Zacks et al., 2001), viewers watched movies of everyday events while brain activity was recorded with fMRI. They then watched the movies again and segmented them into events, by pressing a button whenever they judged that a new event had begun. This allowed us to ask whether, during the initial viewing, there were transient changes in brain activity at those points that viewers would later go on to identify as event boundaries. There were. The regions involved included posterior perceptual processing areas, medial posterior cortex, and lateral dorsal frontal cortex. Similar results have since been obtained with simple geometric animations (Zacks, Swallow, Vettel, & McAvoy, 2006), narrative texts (Speer, Reynolds, & Zacks, 2007; Whitney et al., 2009), and feature films (Zacks, Swallow, Speer, & Maley, 2006).
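
To convey the logic of that analysis (though not the actual pipeline of Zacks et al., 2001), the Python sketch below places impulses at the scans later identified as event boundaries, convolves them with a crude hemodynamic response function, and estimates by least squares how strongly a voxel's time course tracks the resulting regressor. The HRF shape, repetition time, and toy data are assumptions made for illustration.

import numpy as np

def gamma_hrf(tr=2.0, duration=24.0):
    """Crude single-gamma hemodynamic response, sampled every tr seconds (an assumption)."""
    t = np.arange(0.0, duration, tr)
    hrf = t ** 5 * np.exp(-t)                # peaks roughly 5-6 s after an impulse
    return hrf / hrf.max()

def boundary_regressor(boundary_scans, n_scans, tr=2.0):
    """Impulses at boundary scans convolved with the HRF, trimmed to the scan count."""
    impulses = np.zeros(n_scans)
    impulses[list(boundary_scans)] = 1.0
    return np.convolve(impulses, gamma_hrf(tr))[:n_scans]

def boundary_beta(voxel_timeseries, boundary_scans, tr=2.0):
    """Least-squares weight for the boundary regressor, alongside an intercept."""
    x = boundary_regressor(boundary_scans, len(voxel_timeseries), tr)
    design = np.column_stack([np.ones_like(x), x])
    betas, *_ = np.linalg.lstsq(design, voxel_timeseries, rcond=None)
    return betas[1]

# Toy voxel: a transient response of weight 2.0 at each boundary, plus noise.
rng = np.random.default_rng(1)
boundaries = [30, 75, 120]
voxel = boundary_regressor(boundaries, 180) * 2.0 + rng.normal(0.0, 0.5, 180)
print(round(boundary_beta(voxel, boundaries), 2))    # recovers a weight close to 2.0

A positive weight for the boundary regressor is the sketch's analogue of a transient increase in activity time-locked to points that viewers would later mark as event boundaries.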

EST also proposes that event boundaries will tend to occur when features of the situation being observed are changing the most, because these tend to be the times at which prediction errors increase. The evidence supports this proposal as well. Event boundaries correspond with points of change in body position (Newtson, Engquist, & Bois, 1977), motion features (Hard, Tversky, & Lang, 2006; Zacks, 2004), and situational features such as characters, goals, and spatial location (Zacks, Speer, & Reynolds, 2009). Furthermore, a substantial piece of the neural response to event boundaries is explained by these changes (Speer et al., 2007; Zacks, Swallow, Speer & Maley, 2006). This is consistent with the idea that the perception of event boundaries results from processing that is engendered by the changes.

In real life, events come to us through vision, hearing, and the other sensory modalities. Much of the research on event perception has focused on vision. However, there does not appear to be anything particularly special about the visual presentation of events. Events presented in narrative texts are segmented in much the same way as events presented in videos. In one series of studies (Zacks et al., 2009), we asked readers to segment events described in written narratives, presented as continuous printed texts, one clause at a time on a computer screen, one word at a time on the screen, or narrated over headphones. Each story was coded for changes in causes, characters, goals, objects, space, and time. In all cases, these changes predicted the locations of event boundaries, and the more dimensions changed the more likely a boundary was. In a parallel study, viewers segmented a feature film that had been coded for the same types of changes, and exactly the same effects on segmentation were observed. In neuroimaging studies, movies and texts produce similar evoked responses at event boundaries, and show similar relations between responses to feature changes and to event boundaries (see Figure 2).
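
The shape of that relationship can be illustrated with a short Python sketch using synthetic data rather than the coded corpora of Zacks et al. (2009): each clause receives binary change codes on six situational dimensions, segmentation responses are generated under the assumption that each additional changing dimension raises the log-odds of a boundary, and the tabulation then shows the boundary rate climbing with the number of changed dimensions.

import numpy as np

rng = np.random.default_rng(2)
dimensions = ["cause", "character", "goal", "object", "space", "time"]

# Synthetic clause-by-dimension change codes (True = this dimension changed at this clause).
n_clauses = 2000
changes = rng.random((n_clauses, len(dimensions))) < 0.15
n_changed = changes.sum(axis=1)

# Assumed generative rule for illustration: each additional changing dimension
# raises the log-odds that a reader marks an event boundary at that clause.
log_odds = -2.5 + 0.9 * n_changed
p_boundary = 1.0 / (1.0 + np.exp(-log_odds))
boundary = rng.random(n_clauses) < p_boundary

# Boundary rate climbs with the number of dimensions that changed.
for k in range(int(n_changed.max()) + 1):
    mask = n_changed == k
    if mask.any():
        print(f"{k} dimensions changed: boundary rate = {boundary[mask].mean():.2f} (n = {mask.sum()})")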

How do such different inputs lead to similar event segmentation processing? One possibility is that when one reads a story one constructs a simulation of the events described (Barsalou, 2008; Zwaan, 2004), and then event segmentation operates on the resulting representations just as if they came from perception. An fMRI study of narrative reading provided support for this proposition (Speer, Reynolds, Swallow, & Zacks, 2009). This study showed that reading about particular changes in the narrated situation produced selective brain activity in regions associated with processing those changes in perception and action. For example, when readers read that a character interacted with a new object, they selectively activated brain regions associated with dominant-hand grasping.

Fig. 2 
Figure 2. Brain areas that showed significant transient changes during passive viewing of event boundaries. In each instance, participants comprehended the events during fMRI scanning. They were asked simply to view or read for comprehension. Subsequently, the participants were asked to segment the events and the locations of these event boundaries were used to interrogate the initial imaging data.

What about the updating of working memory representations? Studies of reading and film viewing suggest that crossing an event boundary changes how recently encountered information is retrieved. Several studies of reading have manipulated the presence of changes that should produce event boundaries (Rinck & Bower, 2000; Speer & Zacks, 2005; Zwaan, 1996). For example, time changes such as “an hour later” increase readers’ judgments that a new event has begun compared to “a moment later.” After reading such a time change, material presented before the change is less accessible. In a recent study using commercial feature films, Swallow and colleagues tested visual recognition of objects either during the current event or after an event boundary and observed large changes in recognition across event boundaries (Swallow, Zacks, & Abrams, 2009). In a follow-up neuroimaging study, retrieval of information from a preceding event was associated with selective activation of medial temporal structures associated with long term memory (Swallow, Zacks, & Abrams, 2007).

If event segmentation plays a role in the updating of working memory, this should affect how information is encoded into long term memory. Event boundaries appear to act as anchors in long term memory, such that information encoded at event boundaries is remembered better later (Newtson, 1976). Supporting segmentation by inserting commercial breaks (Boltz, 1992) or pauses (Schwan, Garsoffky, & Hesse, 2000) into movies at natural event boundaries can improve memory for the content of the movies, whereas inserting such markers at inappropriate points can impair such memory.

The consequences of event segmentation for later memory can be seen not just in experimental manipulations, but also in individual differences. In one study (Zacks, Speer, Vettel, & Jacoby, 2006), viewers segmented movies of everyday activities and later completed tests of memory for the visual details and temporal order in the movies. The participants included younger adults, healthy older adults, and older adults with very mild dementia. Whereas memory was uniformly good in the younger adults, memory in the older adults was poorer and quite variable—particularly for those with very mild dementia. For older participants, segmentation was variable as well—some identified event boundaries in the same locations as the younger adults did, but others’ segmentation was idiosyncratic. Segmenting in a normative fashion was strongly associated with better later memory for temporal order and visual details. For visual memory, this association remained after accounting for the presence of dementia and for psychometrically assessed cognitive ability. That is, given two otherwise comparable individuals, the one who segmented the activity more normatively remembered more.
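
One way to make “segmenting in a normative fashion” concrete is sketched below in Python: bin the movie into short intervals, mark the bins in which a viewer pressed the button, and correlate that vector with the proportion of other viewers responding in each bin. The one-second bins, the correlation-based score, and the toy data are illustrative assumptions, not necessarily the exact measure used by Zacks, Speer, Vettel, and Jacoby (2006).

import numpy as np

def bin_boundaries(boundary_times, movie_duration, bin_width=1.0):
    """Indicator of which time bins contain at least one segmentation button press."""
    n_bins = int(np.ceil(movie_duration / bin_width))
    binned = np.zeros(n_bins)
    binned[(np.asarray(boundary_times) / bin_width).astype(int)] = 1.0
    return binned

def segmentation_agreement(individual_times, group_times, movie_duration, bin_width=1.0):
    """Correlate one viewer's binned boundaries with the group's boundary proportions."""
    individual = bin_boundaries(individual_times, movie_duration, bin_width)
    norm = np.mean([bin_boundaries(t, movie_duration, bin_width) for t in group_times], axis=0)
    return np.corrcoef(individual, norm)[0, 1]

# Toy example: a viewer whose presses cluster near the group's consensus points
# scores higher than one whose presses fall at idiosyncratic times.
duration = 120.0
group = [[10.2, 34.8, 60.1, 95.5], [9.7, 35.2, 59.4, 96.0], [10.5, 34.5, 61.0, 94.8]]
print(round(segmentation_agreement([10.0, 35.0, 60.0, 95.0], group, duration), 2))
print(round(segmentation_agreement([5.0, 22.0, 48.0, 80.0], group, duration), 2))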

In sum, event segmentation appears to be a central ongoing component of perception and comprehension. It functions as a form of attention by regulating how processing resources are deployed over time. It is important for working memory updating and long term memory encoding. EST provides one account of how all this happens. The account is sure to require revision, but even in its current state it provides a potentially powerful tool for investigating a number of applied problems.

Applications to clinical neuroscience, technology, literature and art

We have already seen that there are substantial individual differences in event segmentation. In addition, there are a number of clinical conditions that can affect event segmentation. In the study just described (Zacks, Speer, Vettel, & Jacoby, 2006), older adults with very mild dementia had poorer event segmentation and memory than their neurologically healthy counterparts. Deficits in event segmentation have also been observed in patients with frontal lobe lesions (Zalla, Pradat-Diehl, & Sirigu, 2003) and schizophrenia (Zalla, Verlut, Franck, Puzenat, & Sirigu, 2004). These deficits can be understood in terms of the neural mechanisms proposed by EST (Zacks et al., 2007). One exciting possibility is that event segmentation may provide a target for intervention in these disorders.

There are also a number of potential applications to information technology. Interfaces designed to teach procedures or scientific processes may benefit from explicitly representing the event structure of the activity for the learner (Zacks & Tversky, 2003). Psychologically adaptive segmentation may provide an efficient way of summarizing large databases of video or multimedia for search and editing (Christoffersen, Woods, & Blike, 2007). Identifying event boundaries may be helpful in scheduling interruptions in the context of tasks such as piloting, driving, or operating machinery.

Finally, event segmentation may provide a powerful lens through which to view art and literature. One important thing that cinema, television, and literature do is represent events. Some basic features of these ubiquitous media are still poorly understood. For example, how is it possible that a film can cut from one time and place to another, instantaneously changing all the information in the visual field, without disorienting the viewer (Münsterberg & Griffith, 1916/1970)? One possibility is that the perception of events regulates how cuts are perceived and which sorts of cuts “work” (Zacks & Magliano, in press). What does a reader retain over the reading of an extended novel (Copeland, Radvansky, & Goodwin, 2009; Radvansky, Copeland, & Zwaan, 2005)? The behavioral and neurophysiological data suggest that readers construct event representations that are segmented according to the same mechanisms as govern the segmentation of live action (Speer et al., 2009; Zacks et al., 2009). Thus, the chunking of experience into events may enable disparate artistic forms to convey experience.

Acknowledgements

The research described here was conducted in collaboration with the Dynamic Cognition Laboratory, whose current members are Heather Bailey, Ed Bryant, Risa Eilbaum, Nayiri Haroutunian, Chris Kurby, Hanisha Manickavasagan, and Jesse Sargent. It was supported by grants from the National Institutes of Health (RO1-MH70674 and R01-AG031150) and the National Science Foundation (BCS 0236651).

References

Baddeley, A. (2003). Working memory: Looking back and looking forward. Nature Reviews Neuroscience, 4, 829-839.

Barsalou, L. (2008). Grounded cognition. Annual Review of Psychology, 59, 617-645.

Boltz, M. (1992). Temporal accent structure and the remembering of filmed narratives. Journal of Experimental Psychology: Human Perception & Performance, 18, 90-105.

Christoffersen, K., Woods, D. D., & Blike, G. T. (2007). Discovering the events expert practitioners extract from dynamic data streams: The mUMP technique. Cognition, Technology, and Work, 9, 81-98.

Copeland, D., Radvansky, G., & Goodwin, K. (2009). A novel study: Forgetting curves and the reminiscence bump. Memory, 17, 323-336.

Elman, J. L. (2009). On the meaning of words and dinosaur bones: Lexical knowledge without a lexicon. Cognitive Science, 33, 547-582.

Enns, J., & Lleras, A. (2008). What's next? New evidence for prediction in human vision. Trends in Cognitive Sciences, 12, 327-333.

Hard, B. M., Tversky, B., & Lang, D. (2006). Making sense of abstract events: Building event schemas. Memory & Cognition, 34, 1221-1235.

James, W. (1890). The principles of psychology (Vol. 1). New York: Henry Holt.

Köhler, W. (1929). Gestalt psychology. New York: H. Liveright.

Kurby, C. A., & Zacks, J. M. (2008). Segmentation in the perception and memory of events. Trends in Cognitive Sciences, 12, 72-79.

Landauer, T. K. (1986). How much do people remember? Some estimates of the quantity of learned information in long-term memory. Cognitive Science, 10, 477-493.

Maia, T. V. (2009). Reinforcement learning, conditioning, and the brain: Successes and challenges. Cognitive, Affective, & Behavioral Neuroscience, 9, 343-364.

Münsterberg, H., & Griffith, R. (1916/1970). The film, a psychological study: The silent photoplay in 1916. New York: Dover Publications.

Newtson, D. (1976). Foundations of attribution: The perception of ongoing behavior. In J. H. Harvey, W. J. Ickes, & R. F. Kidd (Eds.), New directions in attribution research (Vol. 1, pp. 223-248). Hillsdale, NJ: Lawrence Erlbaum Associates.

Newtson, D., Engquist, G., & Bois, J. (1977). The objective basis of behavior units. Journal of Personality and Social Psychology, 35, 847-862.

Radvansky, G., Copeland, D., & Zwaan, R. (2005). A novel study: investigating the structure of narrative and autobiographical memories. Memory, 13, 796-814.

Reynolds, J. R., Zacks, J. M., & Braver, T. S. (2007). A computational model of event segmentation from perceptual prediction. Cognitive Science, 31, 613-643.

Rinck, M., & Bower, G. (2000). Temporal and spatial distance in situation models. Memory & Cognition, 28, 1310-1320.

Schwan, S., Garsoffky, B., & Hesse, F. W. (2000). Do film cuts facilitate the perceptual and cognitive organization of activity sequences? Memory & Cognition, 28, 214-223.

Speer, N. K., Reynolds, J. R., Swallow, K. M., & Zacks, J. M. (2009). Reading stories activates neural representations of perceptual and motor experiences. Psychological Science, 20, 989-999.

Speer, N. K., Reynolds, J. R., & Zacks, J. M. (2007). Human brain activity time-locked to narrative event boundaries. Psychological Science, 18, 449-455.

Speer, N. K., & Zacks, J. M. (2005). Temporal changes as event boundaries: Processing and memory consequences of narrative time shifts. Journal of Memory and Language, 53, 125-140.

Swallow, K. M., Zacks, J. M., & Abrams, R. A. (2007). Perceptual events may be the "episodes" in episodic memory. Abstracts of the Psychonomic Society, 12, 25.

Swallow, K. M., Zacks, J. M., & Abrams, R. A. (2009). Event boundaries in perception affect memory encoding and updating. Journal of Experimental Psychology: General, 138, 236-257.

Whitney, C., Huber, W., Klann, J., Weis, S., Krach, S., & Kircher, T. (2009). Neural correlates of narrative shifts during auditory story comprehension. NeuroImage, 47, 360-366.

Zacks, J. M. (2004). Using movement and intentions to understand simple events. Cognitive Science, 28, 979-1008.

Zacks, J. M., Braver, T. S., Sheridan, M. A., Donaldson, D. I., Snyder, A. Z., Ollinger, J. M., et al. (2001). Human brain activity time-locked to perceptual event boundaries. Nature Neuroscience, 4, 651-655.

Zacks, J. M., & Magliano, J. P. (in press). Film understanding and cognitive neuroscience. In D. P. Melcher & F. Bacci (Eds.), Art and the Senses. New York: Oxford University Press.

Zacks, J. M., Speer, N. K., & Reynolds, J. R. (2009). Segmentation in reading and film comprehension. Journal of Experimental Psychology: General, 138, 307-327.

Zacks, J. M., Speer, N. K., Swallow, K. M., Braver, T. S., & Reynolds, J. R. (2007). Event perception: A mind/brain perspective. Psychological Bulletin, 133, 273-293.

Zacks, J. M., Speer, N. K., Vettel, J. M., & Jacoby, L. L. (2006). Event understanding and memory in healthy aging and dementia of the Alzheimer type. Psychology & Aging, 21, 466-482.

Zacks, J. M., Swallow, K. M., Speer, N. K., & Maley, C. J. (2006). The human brain's response to change in cinema. Abstracts of the Psychonomic Society, 11, 9.

Zacks, J. M., Swallow, K. M., Vettel, J. M., & McAvoy, M. P. (2006). Visual movement and the neural correlates of event perception. Brain Research, 1076, 150-162.

Zacks, J. M., & Tversky, B. (2003). Structuring information interfaces for procedural learning. Journal of Experimental Psychology: Applied, 9, 88-100.

Zalla, T., Pradat-Diehl, P., & Sirigu, A. (2003). Perception of action boundaries in patients with frontal lobe damage. Neuropsychologia, 41, 1619-1627.

Zalla, T., Verlut, I., Franck, N., Puzenat, D., & Sirigu, A. (2004). Perception of dynamic action in patients with schizophrenia. Psychiatry Research, 128, 39.

Zwaan, R. A. (1996). Processing narrative time shifts. Journal of Experimental Psychology: Learning, Memory, & Cognition, 22, 1196-1207.

Zwaan, R. A. (2004). The immersed experiencer: Toward an embodied theory of language comprehension. In B. H. Ross (Ed.), The Psychology of Learning and Motivation (Vol. 44, pp. 35-62). New York: Academic Press.