An Interesting Career in Psychological Science: Vice President of the College Board
By Wayne Camara
PhD (1986) - Industrial/Organizational and Educational Psychology
University of Illinois at Urbana-Champaign
Vice President of Research and Analysis
The College Board
New York City, New York
To many people, psychological and educational tests appear only to create barriers or to inform them that they lack skills or knowledge that they in fact believe they possess. Results from tests are often at odds with our own self-perceptions and belief systems. For example, test results may inform us that our children are well above average within their classrooms, but average in comparison to other students in a state or nation. Test results can inform us that we have not mastered all the content in a field where certification is required or that we lack the skills to be placed into an advanced training program or college course. As we age, test results may also inform us that we no longer have the reaction times to drive a car safely.
Tests engender significant emotional reactions from many individuals and groups, yet there is a strong research base that supports their validity, utility, reliability, and fairness in education and psychology. As an undergraduate psychology student at the University of Massachusetts at Dartmouth, I became interested, during a course on Psychological Testing, in how assessments could provide empirical evidence that supports and informs clinical or educational judgments. I came to understand how information from standardized tests, when used appropriately, could supplement professional judgment and provide a means of comparing individuals to other individuals (a normative purpose), to themselves (a longitudinal purpose), or against a set of established criteria or standards (a criterion-related purpose). In addition, tests served important descriptive and inferential purposes.
Throughout my education I have always been interested in measurement and evidence that could be used in evaluation. I completed a master’s degree in educational measurement and then a certificate of advanced graduate study in school psychology at Rhode Island College and worked for two years as a school psychologist in West Bridgewater, Massachusetts. I administered a large variety of cognitive, personality, and projective tests to students, developed individualized educational plans, and developed psychosocial diagnostic reports for school-aged children who were referred for assessment due to educational or behavioral issues.
After just two years of employment as a school psychologist, I enrolled in a Ph.D. program at the University of Illinois at Urbana-Champaign. My course of study included a unique combination of foundation and quantitative courses in both industrial-organizational psychology and educational measurement. Again, my primary interests were the efficacy and validity of tests and assessment in decision making. My master's thesis evaluated a variety of predictors used in employment selection, and my dissertation involved developing an adaptive screening assessment for applicants to the state civil service. Courses that focused on personnel selection, validation, educational testing, computer-based testing, and measurement theory were the core of my graduate program. Gaining proficiency in statistics and quantitative methods was just as essential.
I began my career at the Human Resources Research Organization (HumRRO) in Alexandria, Virginia, where I had the opportunity to work on a number of extremely interesting research projects, including job analysis studies, identifying personality and other non-cognitive factors associated with success in entry-level managers, and examining the validity of the military's testing program for a variety of uses in schools and in military entrance decisions. I devoted a substantial amount of time to writing responses to federal and state government requests for proposals, which are the primary sources of funding for such educational research organizations. I gained an enormous amount of practical experience in budgeting, staffing, and pricing of research and technical services, areas that are not taught in most graduate programs. However, my training in research design, sampling, and research methodology was invaluable for this type of work.
The primary deliverables of all research organizations are written reports and oral presentations. Psychologists and psychometricians who work in such organizations must communicate clearly to a wide range of audiences. Efficiency in writing is just as important as clarity: it is not unusual to be expected to produce lengthy technical proposals in a few weeks while managing large research studies. Unlike in graduate school, researchers may be working on several different problems with several different teams of researchers under very tight deadlines. The ability to present technical material in oral presentations and to handle questions under fire is an essential skill for success.
I then held a variety of positions in the Science Directorate of the American Psychological Association (APA), from Director of Testing to Associate Executive Director of the Science Directorate. During that time, I was responsible for the development of testing and science policy within APA and for influencing federal policies on testing provisions of the Civil Rights Act, the Americans with Disabilities Act, and the Employee Polygraph Protection Act. In this role I devoted a substantial amount of time and effort to demonstrating the utility of tests to a wide range of audiences.
In my current position as Vice President of Research and Analysis at the College Board, I am involved in the scientific, educational, public policy, media relations, and business aspects of large-scale testing programs such as the Advanced Placement (AP) program, the PSAT/NMSQT, and the SAT. I supervise approximately 75 professionals, half of whom hold a doctoral degree in psychometrics, educational psychology, or other related areas. Research psychologists and psychometricians at the College Board, as at many similar testing organizations, are responsible for designing and conducting research on a wide variety of issues that answers common questions: whether tests are fair, whether tests are coachable, and whether the ordering of test questions has an impact on student performance. We design and develop new tests, such as the SAT writing test and a new eighth-grade assessment that will be parallel to the PSAT/NMSQT and SAT. We also examine data from each test to ensure scores are comparable and to pretest new items for future forms. My work combines applied research on a broad range of issues related to measurement and learning, overseeing operational testing and measurement issues for large-scale testing programs, and working with a large variety of test users and stakeholders to improve and maintain our programs. One of the biggest challenges today is communicating the efficacy of tests and their appropriate role in decision making to a wide variety of audiences.
I recommend that graduate students interested in pursuing a career in testing and measurement take advanced coursework in quantitative psychology, psychometrics, statistics, research design, and measurement. In addition, they should become proficient in understanding and using statistical software programs such as SPSS and SAS, as well as specialty software in areas such as item response theory. Graduate students can also benefit from summer or year-long internships with testing organizations such as the College Board, Educational Testing Service, and ACT, where they can gain important applied experience in large-scale assessment programs.