Testimony of David Woods, PhD

On behalf of the
American Psychological Association
before the
United States House of Representatives
Committee on House Administration
The Honorable Bob Ney, Chairman
Changes in the Election Process

Good morning Mr. Chairman, Congressman Hoyer, and members of the Committee. My name is David Woods and I am a human factors psychologist and professor at Ohio State University and Past-President of the Human Factors and Ergonomics Society. Human Factors psychologists and engineers study the interaction or interface of people and technology in the workplace, very often in high risk settings like aviation, the military, nuclear power, space mission operations, and medicine. For example, I was one of many Human Factors researchers who studied how pilots and computers work together and sometimes fail to work together on the flight deck. New technology is only one ingredient in improved capability and reliability in these fields. The other ingredient is Human Factors studies, user-centered design, and usability engineering.

November, 2000 was a vivid time in all of our lives, in the heartland as well as within the beltway. The intense debate following the electoral surprise and crisis paralleled other debates I have been part of following surprising accidents in nuclear power, aviation, and health care: debates about assigning blame. Some people argued it was a "voter error" problem: ‘They should have been able to follow the arrows.’ ‘I am more careful and wouldn’t have done that.’ Others commented on antiquated, imprecise technology such as punch cards. Many of the young people caught up in the controversy were quite intrigued, having never seen these kinds of things except as relics of their parents’ ancient history.

My field of Human Factors studies the interaction of people and devices, people and computers, and we study how these systems sometimes fail, including how we can learn from these events to improve our systems. What has our science learned that can be applied to election technology, interfaces and systems?

First, the difficulties we witnessed last November are not simply voter error, but rather system issues in user-device interaction. These human-device and human-computer interaction issues apply to election officials tabulating results as well as to voters.

Second, the difficulties we witnessed last November cannot be solved simply by replacing antiquated equipment, because replacement systems can exhibit poor user-device interaction that results in predictable risks of error.

Third, many of the user-device and user-computer issues can be addressed by basic, ‘bread and butter’ usability engineering and testing techniques. Usability engineering can help now if there is an investment process to bring the basic knowledge to federal, state and local election officials.

Fourth, there are unique aspects to voting that create potentially difficult design decisions and tradeoffs that require careful consideration and longer term investment.

We can make no progress if we play the after-the-fact blame game, faulting either dumb users or antiquated equipment. Instead, we have to look at the integrated system of people interacting with a device to accomplish their goals, in this case, registering their preferences for political offices and other public policy issues. Luckily, the difficulties made visible in the last election point to basic, well understood issues in the design of devices to enhance usability and accuracy.

Improvement involves much more than simply replacing antiquated technology. Unfortunately, buying a vendor’s latest model or bringing in computer interfaces will not make all of the problems revealed by the Florida crisis go away. The kinds of problems we saw in the last election can apply to any kind of human-device interaction, whether that device is a mechanical device or a computerized device.

There is a mature research base on user-centered design because researchers on human-device and human-computer interaction have studied situations with many similarities to the voting context. We have worked out principles for how to prevent errors through the layout and design of devices, for example, in training military personnel as technicians and troubleshooters. Techniques for usability testing of prototype designs have matured in the software industry. Another piece of good news is that these usability methods can be done quite economically to fit the requirements of organizations under budget pressure and to help make quick decisions about what is most likely to work and where to invest limited resources to make the biggest impact.

These results from the field of Human Factors point to a couple of absolutely critical issues to make any human-device interaction work effectively. One fundamental issue is -- provide feedback to users. Give people feedback in their interactions with a device so that they can see the results of their actions, recognize problems, and correct them.

This same principle of good feedback extends to the equipment used in the tabulation process and all of the election officials who are involved in the tallying process (and potentially the recounting process) as well – provide a visible audit trail.

It is important to remember that the oldest technologies include a visible physical layout of information, action, and feedback, which brings some important design benefits at low cost. When you use a paper ballot, voters make a positive mark. Shifting to punch cards, by contrast, violates an old rule of thumb: it is generally bad design to use the absence of something (the hole) as the indicator of the presence of some important state we want to track (the vote), what we call coding by absence. When this simple old rule of interface design is violated, some difficult situations can arise, as we witnessed in the recounting process and the debates over criteria about hanging and dimpled chads. Or consider lever systems: the physical lever moves, and we get visible feedback about our choice. We also get a direct cue as to how to change it. We simply change the lever position.

With computer technology, you can design electronic voting systems in many different ways, with many different potential benefits and pitfalls. You may attempt to copy old paper or lever systems. For example, this was the way electronics and computer displays were first introduced into the cockpit. The designers tried to copy the old knobs and dials, and it didn’t work very well. The freedom computer-based systems give to designers provides the power to design voting and tabulation systems in many different ways, but this imposes a responsibility to think through all the different functions you want to accomplish and all of the different ways trouble could arise. Doing this requires usability testing and consideration of different basic issues in human-computer interaction such as layout, legibility (especially for older populations), feedback, recognizing and correcting mis-entries (can we back up or recover from them), and access for people with disabilities.

The technology of user-centered design and usability engineering is readily available to help federal agencies, states, and local election officials make purchasing and design decisions that will avoid these kinds of election crises in the future. We only need a mechanism to bring that knowledge base to bear in the case of voting technology. Independent organizations such as national laboratories and universities have groups with the necessary expertise to quickly provide guides to the human factors of voting and tabulating systems.

However, there is a need for careful consideration of how to use the possibilities of new technology over the longer term. Balancing security and visible feedback, providing wide access across diverse and aging populations despite only occasional use, handling large numbers of issues/ballot choices in a timely fashion, supporting recovery from mistakes, and doing it all at low cost are formidable design constraints. Plus, moving to new technologies and computerization raises new issues, new difficulties and new risks of inaccuracies.

How will people with various disabilities be accommodated by computer interfaces? The standard graphical computer interface is not well suited for those with visual impairments. A great deal of work is going on with alternative interface modalities such as sound and touch to enhance access to electronic resources for people with disabilities. Standards in the context of voting are only beginning to emerge.

Adopting new technology for voting can lead to new risks for inaccuracies. Average levels of imprecision or inaccuracy may drop, but there are risks that spikes of inaccuracies can occur, especially with computerized systems.

Another danger with new electronic systems is that you can give people false confidence, false feedback about what is on the front of the panel, when what is actually happening behind the panel is invisible and inscrutable. That is why the election official interface is also important. What forms of feedback and audit trails are needed? How do we build in monitoring checks to be sure hidden spikes of inaccuracy haven’t occurred?

One of the unanswered questions from last November is, what is a recount? I would submit to you that, as we change technology and adopt different technologies across different states and localities, we have to think through and decide how we want to carry out the recount process. And perhaps we even need to consider what a recount means with the different kinds of technologies we use to register and tabulate votes.

From past research, and unfortunately from a few terrible tragedies involving computer interfaces in health care, we find that vendors’ claims of failure-proof designs merit skepticism. As the humorist Douglas Adams quipped, "The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at or repair." Design that takes into account the possibility of error and unanticipated situations is a fundamental part of human-centered design. Computerized voting and tabulation systems must support our ability to check and detect whether there are spikes of inaccuracy.

It is easy to rationalize away the need for action: hyper-close elections are rare; this precinct didn’t have well-publicized problems; we only had the usual error rate. But, as in many celebrated failures in high-risk industries, we now find that fundamental inaccuracies in registering and tabulating votes have been present in our election equipment and processing all along, as smaller-scale "dress rehearsals" for the Florida crisis. However, it took the events of last November to change how we interpret the previous discrepancies.

The Chicago Tribune concluded that the error rate in Cook County in the last presidential election had doubled to 6%. I am shocked that we seem so willing to tolerate a 3% failure rate as a norm. Where in business, transportation, or medicine would we tolerate such failure rates? True, no one is injured or dies because of these poor designs, as can happen in the operating room or the cockpit, but voting is the centerpiece of democracy. We need to establish systems to monitor for the early warning signs that inaccuracies or systematic errors are creeping into our voting system.

In closing I would like to remind you that technology alone is not sufficient. The system of people and technology harnessed to fulfill the ideals of the democratic process calls us all to make a commitment to excellence.