My completely informal and largely anecdotal observations of academic life have led me to an “assessment type theory” — namely, there are two types of college professors: those who have assessment fever, and those who want no part of it. I can assure you that if I were to analyze this mental data, the results would be dramatic: upwards of 70 percent of academics — even at teaching institutions — would fall into the latter group. The other 30 percent (and this is a major increase in just the past few years) are what I’d call “believers” — planning sabbaticals to Alverno, volunteering as the department’s “assessment person,” arguing with university curriculum committees that curricular change should be based on learning outcome data, etc. And these types are discrete — few academics, if any, are assessment-ambivalent.
I am an academic administrator responsible for the university-wide academic assessment of programs and curricula across a diverse and vibrant institution. As an early adopter of learning outcome assessment, I am quite familiar with hitting the brick wall when attempting to engage colleagues in assessment. Here are some common responses:
- I don’t have time to do assessment/it’s too hard/there’s no compensation for this work.
- I don’t need to assess student learning; I “know it when I see it.”
- I assess students all the time — grades are assessment.
- Assessment is meaningless busy work; I’ve got better/more important things to do.
Assessment coordinators will recognize these statements and, I am sure, will add a few of their own. They have tried all the tricks to engage colleagues — cajoling, arguing, promising and threatening. Carrots and sticks might work with some animals, but as psychologists, we know they don’t produce great results with people, whose motivations are multifaceted and complex. Luckily, psychology is the science of behavior and behavioral change. I strongly believe that academics in the latter camp can be brought into the former; that is, assessment fever can be catching. Cognitive approaches alone don’t engage the motivation that makes one buy into the value of assessment. My approach is to appeal to a fundamental identity characteristic of social scientists: our empiricism, our belief in science.
Here’s a case study to illustrate. (Disclaimer: the case is loosely based on actual events; names and topics have been changed to protect the innocent.) I was working on an accreditation review and needed to engage a social scientist in program assessment. This colleague was the tenured chair of a busy department, carried a 3/3 undergraduate load, advised in the major, and was writing a book — all predictors of his response to the assessment call, which was “I’m just not going to do it” (implied: you can’t make me). Brick wall. So, argue, cajole, promise or threaten? None of the above — instead, I appealed to his identity as a researcher.

“Ok, I hear you. So, no program assessment. Instead, let’s do something else. I have heard you lament, many times, that your Intro students are not picking up key aspects of psychology — and that often, majors cannot communicate these effectively in senior courses.”

“That’s true,” he said.

“Well, is there one aspect you teach — let’s call it a learning goal — that you think is particularly important for majors, yet one they aren’t absorbing?”

“Students have a hard time developing a psychological perspective — the idea that psychology is a science, and that scientific reasoning can help interpret behavior. They get stuck on superficial and uninformed critiques of the world, based on the experience of that one uncle they have, or what they saw last night on TV.”

“Hmmmm. Ok, let’s map out the curriculum, from Introductory Psychology to Senior Seminar, and find in which courses your students encounter and absorb this learning goal. Then, let’s design measures to test how well course activities produce the outcome you want — students with well-developed psychological perspectives. We’ll create specific objectives that operationalize the goal and embed them at the introductory level — perhaps having students recognize or show awareness of the perspective? Then, in later courses, we’ll design activities that get at the objectives with greater complexity — like comparing and contrasting the psychological approach with approaches to knowledge in other disciplines, or even communicating their own rudimentary research ideas, in writing or multimedia?”

Now I could see that I had him — assessment fervor was beginning to show in his eyes. “Huh — that actually sounds interesting, and not that much work at all — we already have all that stuff. I’d love to find out if, and how well, my students are getting this.”
From there, my work was easy. He was engaged and interested in a research project on learning, and spearheaded a learning outcome assessment that accreditors held up as a model. He transformed from a naysayer who didn’t believe in assessment into a researcher passionate about using data to improve student experiences and outcomes. By the time the accreditors arrived, he had assessment fever. After they left, I broke the news: he had been doing assessment all along. “You know, if you’d told me years ago that assessment is just research about learning, we wouldn’t have been arm-wrestling!”
About the author
Provost Carlota Ocampo, PhD, oversees academic affairs, assessment and planning at Trinity Washington University in Washington, D.C. Her previous appointments included associate provost for academic assessment and associate dean of the College of Arts and Sciences, where she directed academic advising and the first-year experience. Ocampo joined the Trinity community in 1997 as assistant professor of psychology and earned tenure and promotion to associate professor in 2003. She has also served as chair of psychology and of the human relations program.
Ocampo received her PhD in neuropsychology from Howard University in 1997. Her teaching and research interests encompass interactions among diversity, oppression and health. She has published on pedagogical reform with changing student populations, racist-incident-based trauma, and ethnicity, gender and disease; her current research examines the use of mobile health technologies with community health clients. In 2014, she was appointed to the APA’s Board of Educational Affairs working group on introductory psychology assessment. She serves as a peer evaluator for the Middle States Commission on Higher Education and is a member of the Leadership Institute for Women in Psychology class of 2011. Her professional passions include student success, higher education and culturally relevant health.