Welcome to the Blog

For the sake of doctors and patients, we must fix hospital culture

When hospitals fail to create a culture where doctors and nurses can speak up, patients pay the price
By Blair Bigham and Amitha Kalaichandran

Too often it is reporters, not doctors, who sound the alarm when systemic problems plague hospitals: whispers in the shadows signal widespread concern, but individuals feel unable to speak up. Recently, reports surfaced that children were dying at higher-than-expected rates after surgery at the University of North Carolina, despite doctors’ warnings about the department’s performance. And whether in Australia, the United Kingdom, Canada, or the United States, reports show that bullying is alive and well.

This pervasive culture—where consultant doctors, residents, and other hospital staff feel that they cannot bring up critically important points of view—must change. It shouldn’t take investigative journalism to fix the culture that permits silence and bullying. But it does take all of us to rethink how physicians and leaders work together to improve hospital culture.

Investing in improving hospital culture makes a difference to patient care and the quality of the learning experience.

Recent studies show just how much workplace culture matters. In a new JAMA Surgery study, surgeons with several reports of “unprofessional behaviour” (defined as bullying, aggression, and giving false or misleading information) had patient complication rates about 40% higher than surgeons with none; the domains of professionalism assessed included competence, communication, responsibility, and integrity. Last year, hospital culture was directly linked to patient outcomes in a major study led by Yale School of Public Health scientist Leslie Curry: risk-standardized mortality rates after a heart attack were higher in hospitals whose culture was less collaborative and open.

Curry’s team created a programme to improve hospital culture, namely by enhancing psychological safety—a term that signifies a willingness of caregivers to speak freely about their concerns and ideas. When hospital culture changed for the better, heart attack outcomes drastically improved and death rates fell.

There are examples of good practice where psychological safety and transparency are valued, and these centres often boast better patient outcomes. A recent systematic review of sixty-two studies, for instance, found fewer deaths, fewer falls, and fewer hospital-acquired infections in healthcare settings with healthier cultures.

The impact of healthcare workplace culture doesn’t end with patient safety. Physician retention, job satisfaction, and teamwork all benefit from a strong organizational culture in hospitals. This is crucial at a time when burnout in medicine is high. Hospitals can also learn from the tech industry, which discovered early on that psychological safety is key to innovation: those who are afraid of failing tend not to suggest the bold ideas that lead to great progress.

So how can hospitals make improvements to their culture?

The first step is to shine a light on culture by measuring it. Staff surveys and on-site observations can expose negative workplace cultures, so that boards and executives weigh culture scores in the same regard as wait times and revenue. Regulators and accreditors could incorporate workplace culture indicators into their frameworks to increase accountability. We recently saw this in Sydney, Australia, where a third residency programme lost its accreditation because of the bullying of junior doctors.

The second is to hire leaders not just for their clinical competence, but also for their ability to foster inclusiveness, to act with integrity and empathy, and to inspire. By setting the “tone at the top,” leaders can influence the “mood in the middle” and chip away at ingrained attitudes that tolerate, or even support, bullying, secrecy, and fear of speaking out.

Another solution rejects the hierarchy that has historically separated doctors, nurses, and patients, and embraces diversity and inclusion. Effective collaboration helps shift tribe-versus-tribe attitudes towards a team mindset. Part of this involves amplifying voices that are traditionally not heard: those of women, people with disabilities, and ethnic and sexual minorities. Leadership, too, must become more diverse and inclusive, reflecting the patient population.

The field of medicine attracts motivated, intelligent, and caring people. But being a good caregiver and being a good leader are very different, and training in the latter is sadly lacking.

For every investigative report that uncovers a hospital’s culture of silence, whether it involves unacceptable bullying, unusual death rates, or pervasive secrecy, there are surely hundreds more left uncovered. The fix for this global epidemic requires deep self-reflection and a firm commitment to choosing leaders who promote transparency and openness. Implicit in physicians’ vow to “do no harm” is a vow not to stay silent, for silence too can be harmful. We must first and foremost create cultures in which we feel safe to speak up when things aren’t right. Our patients’ lives, and those of our colleagues, depend on it.

**Originally published in the BMJ**

Preventing children from dying in hot cars

One of the biggest lessons I learned a decade ago in public-health graduate school was that education was rarely enough, on its own, to fundamentally change behavior. Educating the public about health was “necessary but not sufficient,” as one of my epidemiology professors had put it. Weight loss, smoking cessation, safe sexual practices — education campaigns weren’t enough.

Decades of educating the public about the dangers of leaving children unattended in cars where the temperature can turn deadly — even on a sunny but not especially hot day — clearly have not been sufficient. The deaths of 11-month-old twins on July 26 in a hot car in the Bronx have brought a fresh sense of urgency to finding innovative technology solutions.

But even before that tragedy, bills had been introduced in Congress earlier this year to address the rising incidence of young children dying in overheated cars.

According to the No Heat Stroke organization, which tracks pediatric heatstroke deaths in vehicles, an average of 38 such deaths have occurred annually since 1998, with 53 recorded last year, the most ever. Sadly, the nation appears certain to set a record in 2019, with 32 deaths already by the second week of August. The Kids and Cars safety group, another tracker, notes that “over 900 children have died in hot cars nationwide since 1990.”

Fifty-four percent of these victims are 1 year old or younger. In a little more than half of the deaths, children have been mistakenly left alone by their caregiver, in what is known as Forgotten Baby Syndrome. Other children die after climbing into hot cars without an adult’s knowledge, and others have been knowingly, sometimes criminally, left in hot cars.

The American Academy of Pediatrics recommends rear-facing seats for crash-safety reasons and last year removed its age recommendation, focusing instead on height and weight. But there is an immense irony in the safety policy: rear-facing seats prevent the driver from occasionally making eye contact with the child in the rearview mirror, contact that would help keep the child prominent in the adult’s mind. And because a rear-facing seat is often left in the car whether or not a child is in it, the seat’s presence can too easily be taken for granted.

The father in the New York case said he had accidentally left the twins in rear-facing car seats. (A judge on Aug. 1 paused the pursuit of a criminal case against the twins’ father, pending the results of an investigation.)

As a pediatrics resident physician, I’ve seen hundreds of parents and caregivers of young children, and many are simply overwhelmed, sleep-deprived and vulnerable to making tragic errors. Some parents in high-stress professions may have an additional cognitive load, which can lead to distractions.

The American Academy of Pediatrics suggests several ways to help prevent these tragedies by retraining habits and breaking away from the autopilot mode that often sets in while driving and doing errands. But that’s not enough. The Post noted five years ago that automakers’ promises to use technology to prevent hot-car deaths went unrealized. Liability risks, expense, and the lack of clear regulatory guidelines also discouraged innovation. Congressional attempts in recent years to legislate on this front have failed.

That all may be changing, given the rising number of child deaths. The Hot Cars Act of 2019, introduced in the House by Rep. Tim Ryan (D-Ohio), would require all new passenger cars to be “equipped with a child safety alert system.” The bill mandates a “distinct auditory and visual alert to notify individuals inside and outside the vehicle” when the engine has been turned off and motion by an occupant is detected.

The Hyundai Santa Fe and Kia Telluride already offer such technology, which is a welcome step in the right direction. But it would not identify infants who have fallen asleep and lie motionless; these detectors are not typically sensitive enough to detect the rise and fall of a child’s chest during breathing.

The Senate version of the hot cars bill proposes alerting the driver, once the engine is turned off, if a back door was opened earlier in the trip, offering a reminder that a child may have been placed in a car seat.

The development of sensors for autonomous-vehicle technology is promising: if cars can detect people outside the vehicle, how much harder can it be to alert drivers to people inside it? Other ideas are worth considering. One is a back-seat version of the passenger-seat weight sensor that cues seat-belt use, with a lower weight threshold, sounding an alert (loud enough for a passerby to hear) once the engine is shut off. Another would rely on neither motion nor weight: a carbon-dioxide detector that senses rising levels after the engine is off (we exhale carbon dioxide, and it accumulates in a closed, confined space), sounding an alarm while automatically cooling the vehicle.
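For readers who like to see the logic spelled out, here is a minimal sketch of how these after-shutoff checks might be layered. It is an illustration only: every sensor input, threshold, and name below is a hypothetical assumption, not drawn from the Hot Cars Act or any automaker’s actual system.

```python
# Hypothetical sketch of an after-shutoff child-presence alert.
# All sensor inputs, thresholds, and names are illustrative assumptions,
# not taken from the Hot Cars Act or any automaker's system.

from dataclasses import dataclass

CO2_BASELINE_PPM = 450          # assumed ambient CO2 inside a parked car
CO2_ALERT_RISE_PPM = 200        # assumed rise suggesting someone is breathing inside
SEAT_WEIGHT_THRESHOLD_KG = 2.5  # assumed low threshold so an infant registers


@dataclass
class CabinState:
    engine_off: bool
    rear_door_opened_this_trip: bool  # Senate-bill-style reminder trigger
    motion_detected: bool             # House-bill-style occupant motion
    rear_seat_weight_kg: float
    co2_ppm: float


def should_alert(state: CabinState) -> bool:
    """Return True if any occupant signal fires after the engine is off."""
    if not state.engine_off:
        return False
    # Reminder logic: a rear door was opened this trip, so a child seat
    # may be occupied even if no sensor fires.
    if state.rear_door_opened_this_trip:
        return True
    # Motion-based detection misses a sleeping, motionless infant...
    if state.motion_detected:
        return True
    # ...so back it up with a weight sensor using a low threshold...
    if state.rear_seat_weight_kg >= SEAT_WEIGHT_THRESHOLD_KG:
        return True
    # ...and a CO2 check: exhaled CO2 accumulates in a closed cabin.
    return state.co2_ppm >= CO2_BASELINE_PPM + CO2_ALERT_RISE_PPM


if __name__ == "__main__":
    # A sleeping infant: no motion, but weight and rising CO2 give it away.
    sleeping_infant = CabinState(
        engine_off=True,
        rear_door_opened_this_trip=False,
        motion_detected=False,
        rear_seat_weight_kg=4.0,
        co2_ppm=700,
    )
    print(should_alert(sleeping_infant))  # -> True
```

The point of layering the checks is that each covers another’s blind spot: the door-open reminder requires no sensing at all, the weight sensor catches a motionless child, and the CO2 check catches what weight and motion both miss.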

No parent of a young child is immune to Forgotten Baby Syndrome — we are all capable of becoming distracted, with terrible consequences. Those who have been devastated by such a loss deserve our sympathy, not our scorn. To avoid future such tragedies, applying technical innovation to passenger vehicles is essential.

**Originally published in the Washington Post**

AI Could Predict Death. But What If the Algorithm Is Biased?

Researchers are studying how artificial intelligence could predict risks of premature death. But the health care industry needs to consider another risk: unconscious bias in AI.

Earlier this month the University of Nottingham published a study in PLOS ONE about a new artificial intelligence model that uses machine learning to predict the risk of premature death, drawing on banked health data (covering age and lifestyle factors) from Britons aged 40 to 69. This study comes months after a joint study by UC San Francisco, Stanford, and Google, which reported results of machine-learning-based data mining of electronic health records to assess the likelihood that a patient would die in hospital. One goal of both studies was to assess how this information might help clinicians decide which patients would most benefit from intervention.

The FDA is also looking at how AI will be used in health care and posted a call earlier this month for a regulatory framework for AI in medical care. As the conversation around artificial intelligence and medicine progresses, it is clear we must have specific oversight around the role of AI in determining and predicting death.

There are a few reasons for this. To start, researchers and scientists have flagged concerns about bias creeping into AI. As Eric Topol, physician and author of the book Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, puts it, the challenge of bias in machine learning originates in the “neural inputs” embedded within the algorithm, which may include human biases. And even though researchers are talking about the problem, issues remain. Case in point: the launch of a new Stanford institute for AI a few weeks ago came under scrutiny for its lack of ethnic diversity.

Then there is the issue of unconscious, or implicit, bias in health care, which has been studied extensively, both as it affects physicians in academic medicine and as it shapes the treatment of patients. There are differences, for instance, in how patients of different ethnic groups are treated for pain, though the effect can vary with the doctor’s gender and cognitive load. One study found these biases may be less likely in black or female physicians. (Health apps in smartphones and wearables have also been found to be subject to biases.)

In 2017 a study challenged the impact of these biases, finding that while physicians may implicitly prefer white patients, this preference may not affect their clinical decision-making. However, it was an outlier in a sea of studies finding the opposite. Even at the neighborhood level, which the Nottingham study examined, there are biases: black people, for instance, may have worse outcomes for some diseases if they live in communities with more racial bias toward them. And biases based on gender cannot be ignored: women may be treated less aggressively after a heart attack (acute coronary syndrome), for instance.

When it comes to death and end-of-life care, these biases may be particularly concerning, as they could perpetuate existing disparities. A 2014 study found that surrogate decision-makers of nonwhite patients are more likely to withdraw ventilation than surrogates of white patients. The SUPPORT (Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments) study examined data from more than 9,000 patients at five hospitals and found that black patients received less intervention toward the end of life, and that while black patients expressed a desire to discuss cardiopulmonary resuscitation (CPR) with their doctors, they were significantly less likely to have these conversations. Other studies have reached similar conclusions, with black patients reporting being less informed about end-of-life care.

Yet these trends are not consistent. One study from 2017, which analyzed survey data, found no significant racial differences in end-of-life care. And as one palliative care doctor has noted, many other studies have found that some ethnic groups prefer more aggressive care toward the end of life, a preference that may itself be a response to a systemically biased health care system. Even where preferences differ between ethnic groups, bias can still result when a physician unconsciously fails to present all the options, or assumes which options a given patient would prefer based on ethnicity.

We know that health providers can try to train themselves out of their implicit biases. The unconscious bias training that Stanford offers is one option, and something I’ve completed myself. Other institutions have included training that focuses on introspection or mindfulness. But it’s an entirely different challenge to imagine scrubbing biases from algorithms and the datasets they’re trained on.

Given that the broader advisory council Google launched to oversee the ethics of AI has already been canceled, a better option would be a more centralized regulatory body, perhaps built on the proposal put forth by the FDA, that could serve universities, the tech industry, and hospitals.

Artificial intelligence is a promising tool that has proved its utility for diagnostic purposes, but predicting death, and possibly even determining death, is a unique and challenging area that could be fraught with the same biases that affect analog physician-patient interactions. One day, whether we are prepared or not, we will face the practical and philosophical conundrum of having a machine involved in determining human death. Let’s ensure that this technology doesn’t inherit our biases.

**Originally published in Wired**
