
Preventing bias in algorithms to detect suicide risk



The rise of machine learning has raised hopes that artificial intelligence (AI), with its skill at picking out patterns in complex data sets, might do a better job than clinicians at assessing suicide risk. Applying AI programs to medical records is still a new endeavor, and algorithms are generally better at predicting who won’t die by suicide (almost everyone in these data sets) than predicting who will (a much smaller and more elusive group).

A new study raises a red flag: if these algorithms are not researched and deployed carefully, they could end up doing more harm than good. The study tested two algorithms designed to predict suicide deaths within 90 days of a medical visit, based on an analysis of patients’ electronic medical records. Among patients gauged to be in the top 5% of suicide risk, the first algorithm correctly identified almost half of the suicide deaths among White patients, and the second identified 41% (JAMA Psychiatry, Vol. 78, No. 7, 2021). But both performed abysmally with patients of color. In that same risk category, the first algorithm correctly identified only 7% of the Black and American Indian/Alaska Native patients who went on to die by suicide, and the second correctly identified only 3% of Black patients and 7% of American Indian/Alaska Native patients who died by suicide.
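
To make that comparison concrete, here is a minimal illustrative sketch, in Python with pandas, of how sensitivity within a model’s top 5% risk tier can be computed separately for each racial group. This is not the study’s actual code; the data frame, column names, and values are all hypothetical.

```python
import pandas as pd

# Hypothetical evaluation data: one row per patient, with the model's
# predicted 90-day suicide risk, the observed outcome, and the
# patient's recorded race (all values here are made up).
df = pd.DataFrame({
    "predicted_risk":  [0.91, 0.12, 0.88, 0.05, 0.73, 0.40, 0.95, 0.08],
    "died_by_suicide": [1,    0,    0,    0,    1,    0,    1,    0],
    "race": ["White", "White", "Black", "Black",
             "AI/AN", "AI/AN", "White", "Black"],
})

# Flag patients at or above the 95th percentile of predicted risk,
# i.e., the "top 5% of suicide risk" tier described above.
cutoff = df["predicted_risk"].quantile(0.95)
df["high_risk"] = df["predicted_risk"] >= cutoff

# Group-specific sensitivity: of the patients in each group who died
# by suicide, what fraction did the model flag as high risk?
for race, group in df.groupby("race"):
    deaths = group[group["died_by_suicide"] == 1]
    if len(deaths) > 0:
        print(f"{race}: flagged {deaths['high_risk'].mean():.0%} of suicide deaths")
```

A model can look accurate when this number is averaged over all patients while still missing most deaths in a smaller group, which is why the study reports the metric group by group.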

Part of the disparity is mathematical, said lead author Yates Coley, PhD, a biostatistician at the Kaiser Permanente Washington Health Research Institute. Any algorithm will be better at making predictions on larger data sets, and there were more White patients in the medical system than Black and Indigenous people of color (BIPOC) patients. But layered upon that problem is the issue of structural racism: BIPOC populations have less access to mental health care and thus fewer records of their struggles, Coley said. “Even when BIPOC populations have access to mental health care, they are less likely to be diagnosed and treated appropriately, which means that health record data don’t accurately reflect disease severity,” she said.
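
The sample-size point can be illustrated with a toy simulation, again hypothetical and using scikit-learn rather than anything from the study: a model trained on a small cohort generally discriminates less well than the same model trained on a larger cohort drawn from the same population.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def fit_and_score(n_patients):
    """Simulate n_patients with 10 noisy risk factors, fit a logistic
    model on half of them, and return its AUC on the held-out half."""
    X = rng.normal(size=(n_patients, 10))
    # A rare outcome driven weakly by the first three features.
    logits = X[:, :3].sum(axis=1) - 4.0
    y = rng.random(n_patients) < 1.0 / (1.0 + np.exp(-logits))
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

# Average over several simulated cohorts at each size: discrimination
# tends to improve as the training population grows.
for n in (500, 5_000, 50_000):
    aucs = [fit_and_score(n) for _ in range(10)]
    print(f"n={n:>6}: mean AUC = {np.mean(aucs):.2f}")
```

The simulation captures only the statistical half of Coley’s explanation; the data-quality problem she describes, in which BIPOC patients’ records understate disease severity, would bias a model no matter how much data it was trained on.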

The research makes clear the importance of testing, model by model, whether algorithms reinforce health disparities, Coley said. “Clinical implementation of the suicide prediction models we examined would exacerbate existing disparities in mental health access, treatment, and outcomes for Black, American Indian, and Alaska Native patients,” she said.
