APA President Alan E. Kazdin, PhD, the John M. Musser professor of psychology and child psychiatry at Yale University, knows whether his students have mastered the content of a course such as research methods. But what he's really interested in is whether he has ignited their passion for psychology or for learning itself. The problem?
"There are no really good measures" for assessing that long-term impact, he says, noting that the course content almost doesn't matter in the fast-changing world in which we live.
That is one of the many challenges of evaluating educational effectiveness. But as external pressures to show accountability mount, psychologists are drawing on their expertise to come up with solutions--developing learning outcome measures for liberal education, hosting a national conference on undergraduate education in psychology and creating performance benchmarks for psychology departments.
"The challenges before education at all levels are squarely in the domain of psychology," says Kazdin. "Psychologists are the experts on evaluation."
While the push to show effectiveness began in the K-12 realm, it has now spread to colleges and universities. The cost of higher education is one factor driving that trend, as are concerns about national competitiveness, says Kazdin. "We want to know how we are doing in key topics related to others around the world, and that requires some standardization," he says.
The regional agencies that accredit undergraduate institutions, such as the Southern Association of Colleges and Schools' Commission on Colleges, are also demanding proof that students at colleges and universities are meeting learning outcomes.
The trend worries Robert J. Sternberg, PhD, dean of the School of Arts and Sciences at Tufts University in Medford, Mass. Sternberg, a board member of the Association of American Colleges and Universities (AAC&U), outlined some of the challenges related to accountability at the association's annual meeting in January.
"As the stakes of accountability get higher, people more and more teach to tests," Sternberg says, noting that this has been the effect of the No Child Left Behind Act. "Part of the challenge is ensuring that we not end up with the kinds of trivial tests we have sometimes at the elementary and secondary levels. One could argue about whether they're adequate for those levels, but for sure they're not appropriate for the college level." The cost of testing is another concern, says Sternberg.
And faculty members don't always appreciate the push to prove their effectiveness, adds Sternberg.
"They don't want to feel that someone's imposing a set of external standards on them," he says.
The fact that the academic promotion system doesn't encourage involvement in assessment efforts is another factor behind faculties' reluctance, says psychology professor Bill Hill, PhD, director of the Center for Excellence in Teaching and Learning at Kennesaw State University in Kennesaw, Ga.
"It's a lot of work for faculty, and there's no real reward," he explains. "Many administrators and even their colleagues see this not as research but as service, and service doesn't get the credit that research and excellence in teaching do."
To overcome these challenges, professors need to start measuring educational success by their own yardsticks, says Sternberg. At Tufts, for instance, a faculty committee is developing a list of the skills and attitudes they think are important for students to have and exploring ways to ensure students actually acquire them.
Good models are already out there. Sternberg points to the AAC&U's Liberal Education and America's Promise (LEAP) initiative as one example. Launched in 2005, the initiative has identified essential learning outcomes for all undergraduates.
Because the goal is not only to promote a wide-ranging education for individual students but to ensure economic competitiveness at the national level, the initiative also involves business leaders. In a 2008 LEAP report called "How Should Colleges Assess and Improve Student Learning?" for instance, employers rejected rote learning and called for increased use of senior projects, supervised internships and community-based work in lieu of multiple-choice tests.
What Sternberg likes about the model is its comprehensiveness: Instead of focusing merely on test scores and grades, he says, the LEAP approach also includes such outcomes as creativity, ethical behavior and citizenship.
"Accountability ought to be done in a way that fully reflects the outcomes you want to achieve," says Sternberg.
Robin Hailstorks, PhD, associate executive director and director of precollege and undergraduate programs in APA's Education Directorate, says the AAC&U meeting provided a forum for psychologists to discuss assessment strategies and pedagogical practices that will help educators empower college students to apply the knowledge they obtain within the discipline of psychology to real-world challenges.
"AAC&U and APA share a similar vision for what college graduates need to know in the 21st century in order to be successful," says Hailstorks. "The APA Guidelines for the Undergraduate Psychology Major is very compatible with the essential learning outcomes espoused in AAC&U's LEAP initiative." (The APA guidelines are available online.)
Other efforts to improve educational accountability target psychology education specifically.
Diane F. Halpern, PhD, a psychology professor at Claremont McKenna College in Claremont, Calif., is putting together a conference she hopes will result in a blueprint for undergraduate psychology's future.
Sponsored by APA's Board of Educational Affairs (BEA), the National Conference on Undergraduate Education in Psychology will bring together educators from high schools, undergraduate programs and graduate and professional programs in Tacoma, Wash., June 22-27. They will explore the latest research on teaching and learning, new educational technology, the growing diversity of undergraduates and other developments since APA's last national conference on undergraduate education 16 years ago. The ultimate product will be an APA book.
Learning outcomes will be a major conference theme. "I think of accountability as a way to improve teaching and learning," says Halpern.
Individual psychology programs are also evaluating what their students are actually learning.
"Some programs are doing this work voluntarily because they're interested in it; others are doing it under the gun," says Hill. Many departments have adopted the learning outcomes outlined in the APA Guidelines for the Undergraduate Psychology Major. Endorsed by APA's Council of Representatives in 2006, the guidelines were the work of a BEA task force led by Jane Halonen, PhD.
While the BEA guidelines provide general guidance, a recent article in the American Psychologist (Vol. 62, No. 7) lays out a set of specific quality benchmarks developed by psychologists Dana Dunn, PhD, Maureen McCarthy, PhD, Suzanne Baker, PhD, and others (see March Monitor). The benchmarks allow departments to grade themselves on such areas as student learning outcomes, the program's "climate" and its resources and administrative support.
"If I'm serious about producing quality education in psychology, I need to have a definition in my mind of what quality is," says Hill. "Then I need to ask the question, 'Was I successful?' That's the scholarship of teaching and learning."
Rebecca A. Clay is a writer in Washington, D.C.