Perspective: Are Racial Biases Being Coded Into AI Algorithms?
The Partners Population Health Perspectives blog aims to share the opinions and ideas of our leadership team on health care topics related to population health management. Christine Vogeli, PhD, Director of Evaluation and Research for Partners Population Health, evaluates our care transformation efforts and supports researchers from across the Partners system studying health care quality and value. Her team’s efforts not only shape our internal value-based care implementation strategies, but have also helped generate academic publications that inform population health strategies across the globe. In this post, she shares her thoughts as a contributing author on a recent publication that examines racial bias in high-risk algorithms.
As a large health care provider, Partners HealthCare is continuously thinking of ways to improve how we identify and address our patients’ health care needs. Historically, physicians would diagnose patients’ medical problems and identify their care needs by meeting with patients, examining their symptoms, and reviewing the results of diagnostic and lab tests. But more and more, health care organizations are using the data they already have to improve their ability to accurately diagnose and pinpoint their patients’ needs.
For example, computers now help radiologists identify cancers and have opened the door to remarkable advances in targeted treatments. Computers can also help predict which patients may be at risk for a disease, allowing doctors to monitor those patients more closely or adjust their medications. And computers can help doctors predict which patients might benefit from more support, such as a Partners care management program.
But any prediction is just that, a prediction. Just as the weather forecast is not always right, prediction algorithms don’t always hit the mark. That’s why it’s important for health care organizations to learn, adjust, and build in safeguards as they adopt new tools. I was recently a contributing author on a study published in Science that shows both the promise of and potential issues with using algorithms to identify patients for interventions such as care management. We found that a commercially available algorithm, on average, gave white patients higher risk scores than black patients despite similar levels of disease.
The reason is that, on average, black patients tend to go to doctors and use medical care less frequently than white patients – either because they prefer not to or, more troubling, because of difficulty accessing health care due to barriers such as expense, scheduling issues, or a lack of trust in medical care. Because the commercial algorithm used information about past health care use and costs in addition to information about patients’ chronic conditions, black patients were, on average, assigned lower risk scores. The paper highlights the need to understand the ingredients in any algorithm, to adopt new tools with caution, and to develop safeguards when possible.
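The mechanism can be illustrated with a toy sketch (the numbers and function names below are purely hypothetical, not the study’s data or the vendor’s actual model): when a risk score is built from historical spending, two patients with identical disease burden receive different scores if one of them has used less care.

```python
# Toy illustration with hypothetical numbers: a cost-based risk proxy
# ranks an equally sick patient lower when access barriers have kept
# that patient's historical spending down.

def cost_based_risk(past_cost, max_cost=10_000):
    """Score proportional to historical spending (the problematic proxy)."""
    return past_cost / max_cost

def condition_based_risk(n_chronic_conditions, max_conditions=10):
    """Score based only on documented chronic conditions."""
    return n_chronic_conditions / max_conditions

# Two hypothetical patients with the same disease burden (5 chronic
# conditions each) but different past spending due to unequal access.
patient_a = {"conditions": 5, "past_cost": 8_000}  # frequent care user
patient_b = {"conditions": 5, "past_cost": 4_000}  # faces access barriers

print(cost_based_risk(patient_a["past_cost"]))        # 0.8
print(cost_based_risk(patient_b["past_cost"]))        # 0.4 <- ranked lower
print(condition_based_risk(patient_a["conditions"]))  # 0.5
print(condition_based_risk(patient_b["conditions"]))  # 0.5 <- equal need
```

Under a condition-only score, the two patients tie, which is the intuition behind the change in approach described later in this post.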
At Partners, we added a number of safeguards to how we used the commercial tool studied. While we were using the tool (we have since transitioned to a different high-risk algorithm), only 15% of patients identified as “potentially appropriate” for high-risk care management were identified using their high-risk score alone. The remainder were identified using information on their chronic conditions in combination with patterns of health care use – such as a high number of emergency room visits, visits to a large variety of specialist physicians, or missed primary care appointments. These are situations where care managers are particularly helpful: they can coordinate care and address the work or life issues that make it hard for patients to keep appointments with their regular providers.
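This kind of multi-signal safeguard can be sketched as follows (a minimal, hypothetical illustration – the thresholds and function names are invented for this example, not Partners’ actual criteria): a patient is flagged either by a very high vendor risk score or by chronic disease burden combined with patterns of care use.

```python
# Hypothetical sketch of a multi-signal flag. All cutoffs are
# illustrative assumptions, not Partners' actual implementation.

def potentially_appropriate(risk_score, n_chronic, er_visits,
                            n_specialists, missed_pcp_visits,
                            score_cutoff=0.9):
    # Path 1: very high vendor risk score alone.
    if risk_score >= score_cutoff:
        return True
    # Path 2: chronic disease burden plus utilization patterns that
    # care managers can help address.
    utilization_flags = (er_visits >= 3
                         or n_specialists >= 4
                         or missed_pcp_visits >= 2)
    return n_chronic >= 2 and utilization_flags

print(potentially_appropriate(0.95, 0, 0, 0, 0))  # True  (score alone)
print(potentially_appropriate(0.5, 3, 4, 1, 0))   # True  (conditions + ER use)
print(potentially_appropriate(0.5, 1, 4, 1, 0))   # False (too few conditions)
```

The design point is that the vendor score is one input among several, so a patient the score under-ranks can still be surfaced through documented conditions and care patterns.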
As a second safeguard, the Partners high-risk care management program asks primary care providers to review the lists of potentially appropriate patients and select those they feel could best be helped by the program. On average, our primary care doctors pick about half of the patients on the list and add another 10–15% of patients who were not on it. In the end, we found that patients selected for care management services by Partners were more likely to come from poorer neighborhoods and more likely to be minority patients.
Research and evaluation are cornerstones of effective care transformation. As health care organizations adopt new tools, such as algorithms, they need to pause and consider potential biases, build in safeguards, and study the impact. Partners is proud to be part of the Science study team that examined bias in the commercial algorithm we were using. As a result of the findings, we changed our approach and now use a commercial algorithm that relies only on information about patients’ chronic conditions (rather than historical costs and utilization) to determine a patient’s risk score. We will continue to work with researchers to study and learn as we move forward – and to share findings as a way of continually improving our population health programs and the care we provide our patients.
Dr. Vogeli is the Director of Evaluation and Research within Partners Population Health and a scientist and faculty member within the MGH Mongan Institute and Harvard Medical School. Her principal interests include primary care, disparities, and health care quality. As an embedded investigator, she is involved in the design, implementation, and evaluation of Partners Population Health care delivery innovations. She is an expert in large administrative databases – including claims, financial, and clinical data – and in the use of these data to prospectively identify patients and measure patient outcomes. Dr. Vogeli has collaborated on research grants funded by PCORI, The Commonwealth Fund, The Robert Wood Johnson Foundation, the Agency for Healthcare Research and Quality, and others.