
Welcome to the Blog

Preventing children from dying in hot cars

One of the biggest lessons I learned a decade ago in public-health graduate school was that education was rarely enough, on its own, to fundamentally change behavior. Educating the public about health was “necessary but not sufficient,” as one of my epidemiology professors had put it. Weight loss, smoking cessation, safe sexual practices — education campaigns weren’t enough.

Decades of educating the public about the dangers of leaving children unattended in cars where the temperature can turn deadly — even on a sunny but not especially hot day — clearly have not been sufficient. The deaths of 11-month-old twins on July 26 in a hot car in the Bronx have brought a fresh sense of urgency to finding innovative technology solutions.

But even before that tragedy, bills had been introduced in Congress earlier this year to address the rising incidence of young children dying in overheated cars.

According to the No Heat Stroke organization, which tracks pediatric heatstroke deaths in vehicles, an average of 38 children have died this way each year since 1998, with 53 deaths recorded last year, the most ever. Sadly, the nation appears on track to set another record in 2019, with 32 deaths already recorded by the second week of August. The Kids and Cars safety group, another tracker, notes that “over 900 children have died in hot cars nationwide since 1990.”

Fifty-four percent of these victims are 1 year old or younger. In a little more than half of the deaths, children have been mistakenly left alone by their caregiver, in what is known as Forgotten Baby Syndrome. Other children die after climbing into hot cars without an adult’s knowledge, and others have been knowingly, sometimes criminally, left in hot cars.

The American Academy of Pediatrics recommends rear-facing seats for crash-safety reasons and last year removed the age recommendation, focusing instead on height and weight. But there is an immense irony in the safety policy: Rear-facing seats prevent the driver from occasionally making eye contact with the child in the rearview mirror, which would keep the child prominent in the adult’s mind. And because a rear-facing seat is often left installed whether or not a child is in it, its presence can too easily be taken for granted.

The father in the New York case said he had accidentally left the twins in rear-facing car seats. (A judge on Aug. 1 paused the pursuit of a criminal case against the twins’ father, pending the results of an investigation.)

As a pediatrics resident physician, I’ve seen hundreds of parents and caregivers of young children, and many are simply overwhelmed, sleep-deprived and vulnerable to making tragic errors. Some parents in high-stress professions may have an additional cognitive load, which can lead to distractions.

The American Academy of Pediatrics suggests several ways to help prevent these tragedies by retraining habits and breaking away from the autopilot mode that often sets in while driving and doing errands. But that’s not enough. The Post noted five years ago that automakers’ promises to use technology to prevent hot-car deaths went unrealized. Liability risks, expense and the lack of clear regulatory guidelines also discouraged innovation. Congressional attempts in recent years to legislate on this front have failed.

That all may be changing, given the rising number of child deaths. The Hot Cars Act of 2019, introduced in the House by Rep. Tim Ryan (D-Ohio), would require all new passenger cars to be “equipped with a child safety alert system.” The bill mandates a “distinct auditory and visual alert to notify individuals inside and outside the vehicle” when the engine has been turned off and motion by an occupant is detected.

The Hyundai Santa Fe and Kia Telluride already offer such technology, which is a welcome step in the right direction. But it would not identify infants who have fallen asleep and lie motionless; these detectors are not typically sensitive enough to detect the rise and fall of a child’s chest during breathing.

The Senate version of the hot cars bill proposes alerting the driver, once the engine is turned off, if a back door was opened earlier in the trip, offering a reminder that a child may have been placed in a car seat.
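
To make the two proposals concrete, here is a minimal sketch of the decision rules the bills describe. The sensor inputs and function names are hypothetical, invented for illustration; a production system would involve far more engineering than these few lines suggest.

```python
# Hypothetical sketch of the decision rules in the two bills; the
# sensor inputs and function names are invented for illustration.

def house_bill_alert(engine_on: bool, occupant_motion: bool) -> bool:
    """House version: alert inside and outside the vehicle when the
    engine has been turned off and occupant motion is detected."""
    return not engine_on and occupant_motion

def senate_bill_reminder(engine_on: bool, rear_door_opened: bool) -> bool:
    """Senate version: remind the driver at engine shutoff if a back
    door was opened earlier in the trip."""
    return not engine_on and rear_door_opened

# Example: engine just shut off after a trip where a rear door was opened
if senate_bill_reminder(engine_on=False, rear_door_opened=True):
    print("REMINDER: check the rear seat before leaving the vehicle.")
if house_bill_alert(engine_on=False, occupant_motion=True):
    print("ALERT: occupant detected in parked vehicle!")
```

Note how simple the Senate rule is: it requires no occupant sensing at all, which is why it sidesteps the motion-sensitivity problem noted above, at the cost of producing reminders even when no child is present.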

The development of sensors for autonomous-vehicle technology is promising: if cars can already detect people outside the vehicle, how much harder can it be to detect them inside? Other ideas to consider: a back-seat version of the passenger-seat weight sensor that cues seat-belt use, with a lower weight threshold to alert the driver (loudly enough for a passerby to hear) once the engine is shut off. Or try something that doesn’t rely on motion or weight: a carbon-dioxide detector that senses rising levels (exhaled carbon dioxide accumulates in a closed, confined space) after the engine is off, sounding an alarm while automatically cooling the vehicle.
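
As a thought experiment, the carbon-dioxide idea might look something like the sketch below. Everything here is assumed for illustration: the sensor interface, the 200 ppm rise threshold, the polling interval, and the timeout would all need empirical calibration.

```python
import time

CO2_RISE_PPM = 200   # assumed rise over the shutoff baseline that triggers action
POLL_SECONDS = 30    # assumed polling interval

def monitor_cabin_after_shutoff(read_co2_ppm, alarm, start_cooling,
                                timeout_s=1800):
    """Watch for a sustained rise in cabin CO2 after engine shutoff;
    exhaled breath accumulates in a closed car, so a rise suggests an
    occupant. Returns True if an occupant was detected."""
    baseline = read_co2_ppm()
    elapsed = 0
    while elapsed < timeout_s:
        time.sleep(POLL_SECONDS)
        elapsed += POLL_SECONDS
        if read_co2_ppm() - baseline > CO2_RISE_PPM:
            alarm()          # loud enough for a passerby to hear
            start_cooling()  # automatically ventilate/cool the cabin
            return True
    return False  # no rise within the timeout; stop monitoring

# Example with stand-in callables (a real system would read hardware):
readings = iter([450, 500, 600, 700])
monitor_cabin_after_shutoff(
    read_co2_ppm=lambda: next(readings),
    alarm=lambda: print("ALARM: rising CO2 in parked vehicle!"),
    start_cooling=lambda: print("Cooling cabin..."),
)
```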

No parent of a young child is immune to Forgotten Baby Syndrome; we are all capable of becoming distracted, with terrible consequences. Those who have been devastated by such a loss deserve our sympathy, not our scorn. To avoid such tragedies in the future, applying technical innovation to passenger vehicles is essential.

**Originally published in the Washington Post**

AI Could Predict Death. But What If the Algorithm Is Biased?

Researchers are studying how artificial intelligence could predict risks of premature death. But the health care industry needs to consider another risk: unconscious bias in AI.


Earlier this month the University of Nottingham published a study in PLOS ONE about a new artificial intelligence model that uses machine learning to predict the risk of premature death, using banked health data (on age and lifestyle factors) from Britons aged 40 to 69. This study comes months after a joint study by UC San Francisco, Stanford, and Google, which reported results of machine-learning-based data mining of electronic health records to assess the likelihood that a patient would die in the hospital. One goal of both studies was to assess how this information might help clinicians decide which patients might most benefit from intervention.
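
Mechanically, “using machine learning to predict the risk of premature death” means fitting a classifier to tabular health features and reading off predicted probabilities. The sketch below is purely illustrative, not the Nottingham or Google model: the data is synthetic, the three features are invented, and the outcome is generated by a fabricated rule.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for banked health data; columns are invented.
rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.uniform(40, 69, n),   # age
    rng.normal(27, 4, n),     # body-mass index
    rng.integers(0, 2, n),    # smoker (0/1)
])
# Fabricated outcome: risk loosely rises with age and smoking.
risk = 0.02 * (X[:, 0] - 40) / 29 + 0.03 * X[:, 2]
y = rng.random(n) < (0.02 + risk)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# The predicted probabilities, not the labels, are what a clinician
# would use to flag patients who might benefit most from intervention.
probs = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, probs), 3))
```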

The FDA is also looking at how AI will be used in health care and posted a call earlier this month for a regulatory framework for AI in medical care. As the conversation around artificial intelligence and medicine progresses, it is clear we must have specific oversight around the role of AI in determining and predicting death.

There are a few reasons for this. To start, researchers and scientists have flagged concerns about bias creeping into AI. As Eric Topol, physician and author of the book Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, puts it, the challenge of bias in machine learning originates from the “neural inputs” embedded within the algorithm, which may include human biases. And even though researchers are talking about the problem, issues remain. Case in point: The launch of a new Stanford institute for AI a few weeks ago came under scrutiny for its lack of ethnic diversity.

Then there is the issue of unconscious, or implicit, bias in health care, which has been studied extensively, both as it relates to physicians in academic medicine and as it affects their treatment of patients. There are differences, for instance, in how patients of different ethnic groups are treated for pain, though the effect can vary based on the doctor’s gender and cognitive load. One study found these biases may be less likely in black or female physicians. (It’s also been found that health apps in smartphones and wearables are subject to biases.)

In 2017 a study challenged the impact of these biases, finding that while physicians may implicitly prefer white patients, that preference may not affect their clinical decision-making. However, it was an outlier in a sea of other studies finding the opposite. Even at the neighborhood level, which the Nottingham study looked at, there are biases: black people, for instance, may have worse outcomes for some diseases if they live in communities with more racial bias toward them. And biases based on gender cannot be ignored: Women may be treated less aggressively after a heart attack (acute coronary syndrome), for instance.

When it comes to death and end-of-life care, these biases may be particularly concerning, as they could perpetuate existing differences. A 2014 study found that surrogate decision-makers of nonwhite patients are more likely to withdraw ventilation than surrogates of white patients. The SUPPORT (Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments) study examined data from more than 9,000 patients at five hospitals and found that black patients received less intervention toward the end of life, and that while black patients expressed a desire to discuss cardiopulmonary resuscitation (CPR) with their doctors, they were statistically significantly less likely to have those conversations. Other studies have reached similar conclusions, with black patients reporting being less informed about end-of-life care.

Yet these trends are not consistent. One study from 2017, which analyzed survey data, found no significant difference in end-of-life care that could be related to race. And as one palliative care doctor has indicated, many other studies have found that some ethnic groups prefer more aggressive care toward the end of life, which may itself be a response to fighting a systemically biased health care system. Even though preferences may differ between ethnic groups, bias can still result when a physician unconsciously fails to present all options, or makes assumptions about which options a given patient may prefer based on ethnicity.

We know that health providers can try to train themselves out of their implicit biases. The unconscious bias training that Stanford offers is one option, and something I’ve completed myself. Other institutions have included training that focuses on introspection or mindfulness. But it’s an entirely different challenge to imagine scrubbing biases from algorithms and the datasets they’re trained on.
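
What would “scrubbing” even begin to look like in practice? A first step, more audit than cure, is simply to measure whether a model’s errors fall evenly across demographic groups. The sketch below is a minimal illustration with placeholder arrays standing in for real predictions and group labels; any serious audit would use real cohorts and multiple fairness metrics.

```python
import numpy as np

def false_negative_rate(y_true, y_pred):
    """Among patients who truly had the outcome, the fraction the model missed."""
    positives = y_true == 1
    if not positives.any():
        return float("nan")
    return float(np.mean(y_pred[positives] == 0))

def audit_by_group(y_true, y_pred, group):
    """Per-group false-negative rates; a large gap between groups is a
    red flag that the model's mistakes are not evenly distributed."""
    return {g: false_negative_rate(y_true[group == g], y_pred[group == g])
            for g in np.unique(group)}

# Placeholder arrays standing in for real model output and group labels.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 0, 0, 1])
group  = np.array(["A", "A", "A", "B", "B", "B", "B", "B"])
print(audit_by_group(y_true, y_pred, group))  # e.g. {'A': 0.5, 'B': 0.33...}
```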

Given that the advisory council Google recently launched to oversee the ethics behind AI has already been canceled, a better option would be a more centralized regulatory body, perhaps built upon the proposal put forth by the FDA, that could serve universities, the tech industry, and hospitals.

Artificial intelligence is a promising tool that has shown its utility for diagnostic purposes, but predicting death, and possibly even determining death, is a unique and challenging area that could be fraught with the same biases that affect analog physician-patient interactions. And one day, whether we are prepared or not, we will be faced with the practical and philosophical conundrum of having a machine involved in determining human death. Let’s ensure that this technology doesn’t inherit our biases.

**Originally published in Wired**
