Health

Algorithms can help us combat racial bias – if we use them wisely

Automated computer programs in health care are not all bad

January 16, 2023
Brigham and Women’s Hospital in Boston is using technology to address its own physicians’ racial bias. [Credit: Brigham and Women’s Hospital webpage]

At Brigham and Women’s Hospital in Boston, every time a physician refers a Black or Latino heart patient to a general practitioner instead of a cardiologist, the doctor’s computer will argue back. 

An automated prompt designed to make physicians think twice pops up. The message reads: “Patient is from a racial or ethnic group with historically inequitable access to the Cardiology service; consider changing admission to Cardiology.” The clinician, however, still has the final choice. 

Hospital administrators implemented this pilot program after scrutinizing nearly a decade of data and discovering that white patients showing symptoms of heart failure were directed to the cardiology unit more often than their Latino or Black peers: 67% of white patients were admitted to cardiology, compared with only 53% of Black patients. Specialized care makes a huge difference; patients who end up in cardiology have better health outcomes.
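How might such a rule look in practice? Below is a minimal sketch of the trigger logic, written in Python purely as an illustration. The field names, the list of groups and the exact trigger condition are assumptions made for the example, not Brigham and Women’s actual implementation; only the advisory text and the fact that the clinician keeps the final say come from the hospital’s description.

```python
from dataclasses import dataclass
from typing import Optional

# Groups the hospital's data showed were admitted to cardiology less often.
# (Illustrative assumption: the real system's grouping and criteria may differ.)
UNDERSERVED_GROUPS = {"Black", "Latino"}

ADVISORY = (
    "Patient is from a racial or ethnic group with historically inequitable "
    "access to the Cardiology service; consider changing admission to Cardiology."
)

@dataclass
class Referral:
    patient_group: str            # self-reported race/ethnicity
    heart_failure_symptoms: bool  # showing symptoms of heart failure
    admitting_service: str        # e.g. "General Medicine" or "Cardiology"

def advisory_for(referral: Referral) -> Optional[str]:
    """Return the pop-up text if the referral matches the trigger, else None.

    The prompt only nudges; the clinician can dismiss it and keep the
    original admission decision.
    """
    if (
        referral.patient_group in UNDERSERVED_GROUPS
        and referral.heart_failure_symptoms
        and referral.admitting_service != "Cardiology"
    ):
        return ADVISORY
    return None

# Example: a Black patient with heart-failure symptoms routed to general medicine.
print(advisory_for(Referral("Black", True, "General Medicine")))
```

The key design choice is that the rule only surfaces a message: it flags a pattern the hospital’s own data revealed, but the decision stays with the physician.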

Computer algorithms are known to reproduce the racial bias of the information they use. But Ruha Benjamin, a Princeton University sociologist and author focused on the social dimensions of science and technology, thinks algorithms can also help us be more conscious of our biases. “When we are thinking about the way forward, technology can play a role, but we really have to take seriously the social infrastructure,” she said during a recent discussion at New York University.

Activists and researchers have long warned that artificial intelligence can deepen health care inequalities. In 2019, a study found that an algorithm commonly used in health systems underestimates how sick Black patients are and, as a result, allocates less care to them than to their white peers. Last year, another study found that prediction models of COVID-19 severity were less accurate when they relied on data from only one gender.

There are a few things computer scientists should consider to ensure health algorithms don’t foster social disparities, according to Benjamin, whose book on how our individual choices can help build a better world, Viral Justice, was published this past October. First, she said, programmers must consider what is called “structural competency,” a framework based on the social determinants of health (the environmental conditions that shape a person’s health outcomes). “When we think about different disparities that exist, what are the factors that get under our skin, that enter our bloodstream, that create premature death?” 

Second, they should take into account how the physician’s own values and biases affect treatment – what she called “cultural humility.” Benjamin said discussions of culture in health care often focus on the lifestyles of patients. “What cultural humility does is flip the gaze and say, rather than fixate on the cultural differences of patients, which might play a role but often are very stereotypical, let’s look at the values that practitioners bear in clinical interactions.”

This is the idea behind the pilot program at Brigham and Women’s Hospital: using technology to educate health practitioners about racial bias in health care. “Often, we talk about scientific literacy for the public and we ignore the social [and] historical literacy that is needed for the so-called experts,” Benjamin said.

Benjamin mentioned other examples in which organizations are illuminating social disparities using artificial intelligence. In New York City, for instance, the non-profit group The New Inquiry used machine learning to create a map that predicts the locations of white-collar crime. While other predictive policing apps are designed to target street crime and criminalize poverty, the “White Collar Risk Zones” map criminalizes wealth, the group says on its website.

At the NYU event, Benjamin was interviewed by the public radio host Meghna Chakrabarti. “You don’t strike me as a techno skeptic,” Chakrabarti remarked at the end of the conversation. Benjamin replied that to be a “techno skeptic,” she would have to believe more in the power of technology than in the people who possess it. “That gives too much power and autonomy to technology. It presumes that’s creating the problems rather than the human beings behind it,” she said.

Correction: An earlier version of this article stated “But Ruha Benjamin, a Princeton University sociologist and author focused on the social dimensions of science and technology, thinks algorithms can also help us bypass or be more conscious of our biases.” This has been changed to “But Ruha Benjamin, a Princeton University sociologist and author focused on the social dimensions of science and technology, thinks algorithms can also help us be more conscious of our biases.” to clarify that Benjamin doesn’t think algorithms can help us bypass human bias. Updated January 26, 2023


About the Author

Gina Jiménez

Gina Jiménez grew up in Mexico and studied political science at the Center for Research and Teaching in Economics (CIDE). She used to work at Data Cívica, a Mexican NGO that uses data and technology to advance human rights causes. During the Covid pandemic, she discovered her interest in health journalism. Georgina enjoys reading, cooking, practicing yoga, and looking for the best breakfast food.
