Earlier this month the University of Nottingham published a study in PLOS ONE about a new artificial intelligence model that uses machine learning to predict the risk of premature death, using banked health data (on age and lifestyle factors) from Brits aged 40 to 69. This study comes months after a joint study between UC San Francisco, Stanford, and Google, which reported results of machine-learning-based data mining of electronic health records to assess the likelihood that a patient would die in the hospital. One goal of both studies was to assess how this information might help clinicians decide which patients might most benefit from intervention.
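To make concrete what this kind of model involves, below is a minimal, hypothetical sketch of a mortality-risk classifier trained on synthetic demographic and lifestyle features. The feature names, the model choice (a random forest via scikit-learn), and the data are all illustrative assumptions for the sake of example, not the Nottingham study's actual methods or data.

```python
# Illustrative sketch only: a generic mortality-risk classifier on synthetic
# demographic/lifestyle features. Features, model, and data are hypothetical
# and not drawn from the Nottingham study.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000

# Synthetic cohort: ages 40-69 plus a few lifestyle/biometric stand-ins.
X = pd.DataFrame({
    "age": rng.integers(40, 70, n),
    "smoker": rng.integers(0, 2, n),
    "bmi": rng.normal(27, 4, n),
    "physical_activity_hrs_week": rng.exponential(3, n),
})

# Synthetic outcome: in this toy generator, the probability of premature
# death rises with age, smoking, and BMI.
logit = -8 + 0.08 * X["age"] + 0.9 * X["smoker"] + 0.05 * X["bmi"]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Evaluate discrimination on held-out data (AUC), the kind of metric
# such prediction studies typically report.
pred = model.predict_proba(X_test)[:, 1]
print(f"Held-out AUC: {roc_auc_score(y_test, pred):.3f}")
```

The point of the sketch is simply that the model learns whatever patterns are in the training data; if those data encode biased patterns of care, the predictions can reflect them.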
The FDA is also looking at how AI will be used in health care, and earlier this month it posted a call for a regulatory framework for AI in medical care. As the conversation around artificial intelligence and medicine progresses, it is clear we must have specific oversight around the role of AI in determining and predicting death.
Then there is the issue of unconscious, or implicit, bias in health care, which has been studied extensively, both as it affects physicians within academic medicine and as it shapes how patients are treated. There are differences, for instance, in how patients of different ethnic groups are treated for pain, though the effect can vary with the doctor's gender and cognitive load. One study found these biases may be less likely in black or female physicians. (It's also been found that health apps in smartphones and wearables are subject to biases.)
In 2017 a study challenged the impact of these biases, finding that while physicians may implicitly prefer white patients, this may not affect their clinical decision-making. However, it was an outlier in a sea of other studies finding the opposite. Even at the neighborhood level, which the Nottingham study looked at, there are biases: black people may have worse outcomes for some diseases, for instance, if they live in communities with more racial bias toward them. And biases based on gender cannot be ignored: women may be treated less aggressively after a heart attack (acute coronary syndrome), for instance.
When it comes to death and end-of-life care, these biases may be particularly concerning, as they could perpetuate existing disparities. A 2014 study found that surrogate decision makers of nonwhite patients are more likely to withdraw ventilation than surrogates of white patients. The SUPPORT (Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments) study examined data from more than 9,000 patients at five hospitals and found that black patients received less intervention toward the end of life, and that while black patients expressed a desire to discuss cardiopulmonary resuscitation (CPR) with their doctors, they were statistically significantly less likely to have those conversations. Other studies have reached similar conclusions, with black patients reporting being less informed about end-of-life care.
Yet these trends are not consistent. One study from 2017, which analyzed survey data, found no significant difference in end-of-life care that could be related to race. And as one palliative care doctor has noted, many other studies have found that some ethnic groups prefer more aggressive care toward the end of life, which may itself be a response to fighting against a systemically biased health care system. Even though preferences may differ between ethnic groups, bias can still creep in when a physician unconsciously fails to present all options, or makes assumptions about which options a given patient would prefer based on their ethnicity.