
Welcome to the Blog

Talking to Your Child’s Doctor About Alternative Medicine

By Drs. Amitha Kalaichandran, Roger Zemek and Sunita Vohra

A few months ago, the Centers for Disease Control and Prevention published a report about a young boy from Connecticut who developed lead poisoning as a direct result of his parents giving him a magnetic healing bracelet for teething. Every few months, it seems, a story covers another tragic case of a parent choosing an unconventional medical treatment that causes harm.

More often, the alternative treatments parents choose pose little risk to their kids — anything from massage therapy to mind-body therapies like mindfulness meditation and guided imagery. Research indicates that overall, there are few serious adverse events related to using alternative therapies. But when they do occur, they can be catastrophic, in some cases because caregivers or alternative care providers are poorly informed on how to recognize the signs of serious illness.

The National Center for Complementary and Integrative Health, part of the National Institutes of Health, now refers to these alternative treatments as complementary health approaches, or C.H.A. They are defined as “a group of diverse medical and health care systems, practices and products not presently considered to be part of conventional Western medicine.” In some cases they complement traditional care. In others they are used in place of standard medical practices.

It’s a polarizing subject that unfortunately gets muddled with conversations about anti-vaccination. But while some anti-vaxxers use complementary health approaches, people who use C.H.A. don’t necessarily doubt vaccine effectiveness.

What’s less clear is what proportion of parents choose complementary health approaches for their children, for which conditions, and how effective they perceive these approaches to be. We also know very little about parents’ willingness to discuss their use with their child’s doctor, and most doctors receive little training in C.H.A. use, especially in children, or in how to counsel parents about it.

To explore these questions, we surveyed parents in a busy emergency room in eastern Ontario, Canada. As reported in our recent study, just over 60 percent said they gave their child a C.H.A. within the last year. Vitamins and minerals (59 percent) were the most common ingested treatment, and half the parents used massage. Our research found that parents with a university-level education were more likely to use a complementary treatment than those with less education.

Parents also perceived most of the C.H.A. that they used — from vitamins and minerals to aromatherapy to massage — as effective. However, less than half of parents felt that homeopathy or special jewelry would be helpful.

As reported in our recent paper, we then asked parents if they had tried a complementary therapy for the problem at hand before they came to the emergency room. Just under one-third reported using C.H.A. for a specific condition, most often for gastrointestinal complaints. Interestingly, in the case of emergency care, there was no correlation with the parents’ level of education.

In work we previously presented at the International Congress of Pediatrics, we asked these parents whether they believed their provider — a nurse practitioner or a doctor — was knowledgeable about complementary medicine. About 70 percent believed their health provider was knowledgeable about C.H.A., although this perception was less likely among parents with a university-level education. Surprisingly, 88 percent said they felt comfortable discussing their use of C.H.A. with their medical provider.

Previous reports have found that only between 40 and 76 percent of parents actually disclose C.H.A. use to their doctor. In our study, we were talking to parents who had brought their child to an emergency room, where they would be more likely to mention whatever treatments they had tried. In many cases, parents may not even take their child to the doctor if the problem is not serious. So the overall proportion of parents who use C.H.A. for their children is likely underestimated.

Our findings underscore the need for parents and their child’s health providers to have more open conversations about what they are giving to their child for health reasons.

Medical providers also need to actively ask whether C.H.A. is being used and to stay up to date on the current evidence about complementary therapies, including potential interactions with any medications the child may also be taking. Much of this information is summarized on the N.C.C.I.H. website.

Here are some ways parents can approach the issue of alternative therapies with their doctors:

■ Write down everything your child is using as though it’s a medication. Include any special diets, teas and visits to other complementary medicine providers.

■ Keep track of any positive and negative results from C.H.A. that you notice, including no effect, and the cost involved.

■ If your child’s health provider doesn’t ask about C.H.A., start the conversation.

Physicians and other medical providers should:

■ Learn more about these treatments and the evidence behind them. The N.C.C.I.H. is a good place to start.

■ Try not to be judgmental; a rift with a parent over choices you disagree with can lead to a breakdown in the therapeutic relationship.

■ Evaluate risks and benefits, and be aware of what is unknown about the specific C.H.A. being used. Make efforts to learn more about the therapy and take action if there are clear side effects and risks, documenting the discussion where appropriate.

Parents and doctors are on the same team when it comes to caring for a child’s health. Taking time to explore what parents and children are using, including any therapies that lie outside the scope of conventional medical practice, provides an opportunity to have open and honest discussions about the risks, benefits and safety of complementary health approaches.

**Originally published in the New York Times**

Facial recognition may reveal things we’d rather not tell the world. Are we ready?


Stanford Graduate School of Business researcher Michal Kosinski set out to probe one such question in a controversial new study. Kosinski and his colleagues fed a deep-learning algorithm thousands of photos of white Americans who self-identified as either gay or straight, tagged accordingly. The software then learned the physical commonalities, micro quantitative differences based on facial measurements, that distinguish gay from straight faces.

His team found that the computer had astonishingly accurate “gaydar,” though it was slightly better at identifying gay men (81 percent accuracy) than lesbians (74 percent accuracy). Notably, the software outperformed human judges in the study by a wide margin.

Kosinski’s work was based on previous but controversial research that suggests that the hormonal balance in the womb influences sexual orientation as well as appearance. “Data suggests that [certain groups of] people share some facial characteristics that are so subtle as to be imperceptible to the human eye,” Kosinski says. The study, according to Kosinski, merely tested that theory using a respected algorithm developed by Oxford Vision Lab.

Predictably, rights groups, including GLAAD and the Human Rights Campaign, were outraged by Kosinski’s study, questioning his methods while suggesting that his program was a threat to members of the gay community.

Kosinski is known as both a researcher and a provocateur. He says that one of the goals for the study was to warn us of the dangers of artificial intelligence. He designed his research, he says, to goad us into taking privacy issues around machine learning more seriously. Could AI “out” people in any number of ways, making them targets of discrimination?

But for the sake of argument, let’s suppose that facial-recognition technology will keep improving, and that machines may someday be able to quickly detect a variety of characteristics — from homosexuality to autism — that the unaided human eye cannot. What would it mean for society if highly personal aspects of our lives were written on our faces?

I remember the first time I saw a baby with Down syndrome, a condition that appears in patients who have a third copy of chromosome 21 instead of the usual pair. The infant was born in a community hospital to a mother who had declined genetic screening. As he lay in his cot a few hours after birth, his up-slanted “palpebral fissures” (eyelid openings) and “short philtrum” (groove in the upper lip), among many other things, seemed subtle. It only took a glance from my attending, an experienced pediatrician, to know that the diagnosis was likely. (Later on, a test called a karyotype confirmed the presence of an extra chromosome.)

Could AI someday replace a professional human diagnostician? Just by looking at a subject, Angela Lin, a medical geneticist at Massachusetts General Hospital, can discern a craniofacial syndrome with a high degree of accuracy. She also uses objective methods — measuring the distance between eyes, lips, and nose, for example — for diagnostic confirmation. But this multifaceted technique is not always perfect. That’s why she believes facial recognition software could be useful in her work.

Lin stresses that facial recognition technology is just one of many diagnostic tools, and that in most cases it’s not a substitute for a trained clinical eye. She also worries that widespread use of facial recognition software could be problematic: “The main barrier for me is privacy concerns. . . we want to be sure the initial image of the person is deleted.”

Autism, for one, may involve physical characteristics too subtle for the human eye to detect. A few months ago, an Australian group published a study that used facial-recognition technology to discern the likelihood of autism using 3-D images of children with and without the condition. As in Kosinski’s study, the computer “learned” the facial commonalities of those with autism and successfully used them as a predictive tool.

The lead study author, Diana Tan, a PhD candidate at the University of Western Australia School of Psychological Sciences, warns that the technology has its limitations. A diagnosis of autism requires two distinct elements: identifying social and communication challenges, and analyzing repetitive behaviors and restrictive interests.

Some scientists believe the social-communication difficulties may be linked to elevated prenatal testosterone — known as the “extreme male brain” theory of autism. Facial masculinization may result from this excessive testosterone exposure, and the computer algorithm was good at picking it up, which could explain its ability to predict autism through a photo alone.

The facial recognition technology was less successful in tracking traits related to severity: that is, repetitive behaviors and restrictive interests. While the computer successfully identified children with autism whose behaviors were marked by a lack of empathy and sensitivity, among other typically male traits (i.e., social-communication issues), it was less successful in diagnosing children who predominantly exhibited restrictive and repetitive behaviors. This suggests that the latter aspects may not be related to hormone exposure and its related physical changes.

“While [the study] supports the ‘hypermasculine brain theory’ of autism,” Tan says, “it’s not a perfect correlation.”

“In my view,” she says, “[our technique] should be complementary to existing behavioral and development assessments done by a trained doctor, and perhaps one day it could be done much earlier to help evaluate risk,” adding that 3-D prenatal ultrasounds may potentially provide additional data, allowing autism risk to be predicted before birth.

Regardless of the technology’s apparent shortcomings, companies have been quick to leverage big data and facial-recognition capabilities to assist diagnosticians. Boston-based FDNA has been developing technology for use in clinical settings over the last five years and released a mobile app for professionals called Face2Gene in 2014. In principle, it’s similar to the facial recognition software used in Tan’s and Kosinski’s studies, but rather than simply serving pure science, it’s intended to do what doctors like Lin spend decades learning to do: make diagnoses of genetic conditions based on facial characteristics.

Last year, the company teamed up on a study to use the app to help with autism diagnoses. The work has not yet been validated in the clinical setting, but it is already gaining adherents.

“We have over 10,000 doctors and geneticists in 120 countries using the technology,” says Jeffrey Daniels, FDNA’s marketing director. “As more people use it, the database expands, which improves its accuracy. And in cases where doctors input additional data” — for instance, information about short stature or cognitive delay, which often helps narrow down a diagnosis — “we can reach up to 88 percent diagnostic accuracy for some conditions.”

Apple, Amazon, and Google have all teamed up with the medical community to try to develop a host of diagnostic tools using the technology. At some point, these companies may know more about your health than you do. Questions abound: Who owns this information, and how will it be used?

Could someone use a smartphone snapshot, for example, to diagnose another person’s child at the playground? The Face2Gene app is currently limited to clinicians; while anyone can download it from the App Store on an iPhone, it can only be used after the user’s healthcare credentials are verified. “If the technology is widespread,” says Lin, “do I see people taking photos of others for diagnosis? That would be unusual, but people take photos of others all the time, so maybe it’s possible. I would obviously worry about the invasion of privacy and misuse if that happened.”

Humans are pre-wired to discriminate against others based on physical characteristics, and programmers could easily manipulate AI programming to mimic human bias. That’s what concerns Anjan Chatterjee, a neuroscientist who specializes in neuroesthetics, the study of what our brains find pleasing. He has found that, relying on baked-in prejudices, we often quickly infer character just from seeing a person’s face. In a paper slated for publication in Psychology of Aesthetics, Creativity, and the Arts, Chatterjee reports that a person’s appearance — and our interpretation of that appearance — can have broad ramifications in professional and personal settings. This conclusion has serious implications for artificial intelligence.

“We need to distinguish between classification and evaluation,” he says. “Classification would be, for instance, using it for identification purposes like fingerprint recognition. . . which was once a privacy concern but seems to have largely faded away. Using the technology for evaluation would include discerning someone’s sexual orientation or for medical diagnostics.” The latter raises serious ethical questions, he says. One day, for example, health insurance companies could use this information to adjust premiums based on a predisposition to a condition.

As the media frenzy around Kosinski’s work has died down over the last few weeks, he is gearing up next to explore whether the same technology can predict political preferences based on facial characteristics. But wouldn’t this just aggravate concerns about discrimination and privacy violations?

“I don’t think so,” he says. “This is the same argument made against our other study.” He then reveals his true goal: “In the long term, instead of fighting technology, which is just providing us with more accurate information, we need solutions to the consequences of having that information. . . like more tolerance and more equality in society,” he says. “The sooner we get down to fixing those things, the better we’ll be able to protect people from privacy or discrimination issues.”

In other words, instead of raging against the facial-recognition machines, we might try to sort through our inherent human biases instead. That’s a much more complex problem that no known algorithm can solve.

**Originally published in the Boston Globe**

Could a VR walk in the woods relieve chronic pain?


When pain researcher Diane Gromala recounts how she started in the field of virtual reality, she seems reflective.

She had been researching virtual reality since the early 1990s, but her shift to focusing on how it could be used for chronic pain management began in 1999, when her own chronic pain worsened. Prior to that, her focus was on VR as entertainment.

Gromala, 56, was diagnosed with chronic pain in 1984, but the left-sided pain that extended from her lower stomach to her left leg worsened over the next 15 years.

“Taking care of my chronic pain became a full-time job. So at some point I had to make a choice — either stop working or charge full force ahead by making it a motivation for my research. You can guess what I chose,” she said.

Now she’s finding that immersive VR technology may offer another option for chronic pain, which affects at least one in five Canadians, according to a 2011 University of Alberta study.

“We know that there is some evidence supporting immersive VR for acute pain, so it’s reasonable to look into how it could help patients that suffer from chronic pain.”

Gromala has a PhD in human computer interaction and holds the Canada Research Chair in Computational Technologies for Transforming Pain. She also directs the pain studies lab and the Chronic Pain Research Institute at Simon Fraser University in Burnaby, B.C.

Using VR to relieve or treat acute pain is not new.

In the 1990s, researcher Hunter Hoffman conducted one of the earliest studies of VR for pain relief at the University of Washington’s Human Interface Technology Lab. His initial focus was burn patients.

Movement and exercise

Since then, the field has expanded. Gromala’s lab focuses on bringing evidence-based therapies that work for chronic pain, such as mindfulness-based stress reduction, into virtual reality. The team has published studies on its virtual meditative walk, which guides and relaxes patients.

Movement and exercise are a key part of chronic pain management in general. But for many patients, it can be too difficult.

“Through VR we can help create an environment where, with a VR headset, they can feel like they are walking through a forest, all while hearing a guided walking meditation,” Gromala said.

The team also designed a meditation chamber: as a person lies in the enclosed space and their breathing becomes more relaxed, a jellyfish viewed through VR dissolves.

Each experiment gives real-time feedback to the patient through objective measures of pain such as skin temperature and heart rate. When a person feels pain, for instance, skin surface temperature and heart rate can rise.
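To illustrate the general idea, here is a minimal, purely hypothetical sketch of such a biofeedback loop in Python. The simulated heart-rate reading and the set_fog_density hook are placeholders for whatever sensors and VR engine parameters a lab like Gromala’s actually uses; this is not her team’s code.

```python
import math
import random
import time

# Illustrative sketch only: a simulated sensor and a print-based "VR" hook stand in
# for the real hardware and rendering engine a lab would use.
BASELINE_BPM = 70.0    # calibrated per patient at the start of a session (assumed)
RESPONSE_RANGE = 30.0  # bpm range over which the scene responds (assumed)

def read_heart_rate(elapsed_s: float) -> float:
    """Simulate a heart-rate reading (bpm) that slowly settles toward baseline."""
    return BASELINE_BPM + 25.0 * math.exp(-elapsed_s / 60.0) + random.uniform(-2.0, 2.0)

def set_fog_density(value: float) -> None:
    """Stand-in for adjusting a calming visual parameter in the VR forest scene."""
    print(f"fog density -> {value:.2f}")

def biofeedback_step(elapsed_s: float) -> None:
    bpm = read_heart_rate(elapsed_s)
    # Map an elevated heart rate (a rough proxy for pain or arousal) onto the scene:
    # the calmer the patient, the clearer the virtual forest becomes.
    arousal = min(max((bpm - BASELINE_BPM) / RESPONSE_RANGE, 0.0), 1.0)
    set_fog_density(arousal)

if __name__ == "__main__":
    for second in range(0, 120, 5):  # a short two-minute demo loop
        biofeedback_step(float(second))
        time.sleep(0.1)              # shortened polling interval for the demo
```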

While pain medications can be important, chronic pain treatment should also address lifestyle aspects, says Neil Jamensky, a Toronto anesthesiologist and chronic pain specialist.

“Physical rehabilitation therapy, psychological support and optimizing things like nutrition, exercise, sleep and relaxation practices all play key roles in chronic pain management,” he said.

Going global

Other researchers like Sweden’s Dr. Max Ortiz-Catalan from Chalmers University of Technology have looked at virtual and augmented reality for phantom limb pain — the particularly challenging syndrome among amputees who experience pain in a limb that is not physically there.

In his study, published in The Lancet in December 2016, Ortiz-Catalan demonstrated a 47 per cent reduction in symptoms among VR participants.

He believes the reason behind it is a “retraining” of the brain, in which neural pathways effectively re-route themselves to focus more on movement.

“We demonstrated that if an amputee can see and manipulate a ‘virtual’ limb — which is projected over their limb stump — in space, over time, the brain retrains these areas.

“Through this retraining, the brain reorganizes itself to focus on motor control and less on pain firing,” said Ortiz-Catalan.

With only 14 patients, this was a pilot study, but he plans to expand the work into a multi-centre, multi-country study later this year. The University of New Brunswick is one of the planned study sites.

There’s an app for this

Others in the United States have published their own findings of VR for chronic pain.

Last month, Ted Jones and colleagues from Knoxville released results of their pilot study of 30 chronic pain patients who were offered five-minute sessions using a VR application called “Cool!” — an immersive VR program administered through a computer and viewed through a head-mounted device.

All participants reported a decrease in pain while using the app, some by as much as 60 per cent, and post-session pain decreased by 33 per cent. The findings were published in the journal PLOS ONE.

“What was interesting to observe was that the pain decreased for six to 48 hours post-VR experience. It’s not as long as we would like, but does illustrate that relief can be sustained over some period of time,” Jones said.

His team will be expanding the research this year and will also look at how VR can help with the challenging mental health side-effects of chronic pain.

Next steps

Jamensky points out that while VR could be a promising treatment one day, one challenge with clinical trials is their reliance on pain scores to assess the effectiveness of VR, which may overshadow individual patient goals.

For instance, while decreasing an individual’s pain score from a “seven out of 10” to a “three out of 10” can be challenging, improving function and quality of life is often more valuable to the patient.

“A pain score may not always be the best way to assess treatment success, since the therapeutic goal may not be to eliminate pain or improve this score, but to ensure better sleep, better mobility, improved mood or even an ability to return to work,” he said.

VR as a technology for chronic pain management is in its infancy. Gromala notes that further research, in addition to standardizing the VR delivery devices, is needed before it becomes a standard of care. And future studies must include practical outcomes.

“It is important to realize that the ‘pain’ of chronic pain may never go away, and that ultimately the individual must learn to deal with the pain so that they can function better,” Jamensky said.

Gromala agrees.

For her, developing an awareness for how sleep, mood and exercise affect her own pain experience has made a huge difference.

In fact, it has motivated her to continue both advocating for chronic pain patients and to partner with clinical pain specialists on research.

“‘Taking care of yourself’ means something different for chronic pain sufferers. It’s much tougher,” Gromala said.

“So as researchers we have a big task ahead of us, and sometimes it means exploring whether out-of-the-box methods like VR can help.”

**Originally published on CBC.ca**

For the sake of doctors and patients, we must fix hospital culture


When hospitals fail to create a culture where doctors and nurses can speak up, patients pay the price
By Blair Bigham and Amitha Kalaichandran

Too often it is reporters, not doctors, who sound the alarm when systemic problems plague hospitals: whispers in the shadows indicate widespread concerns, but individuals feel unable to speak up. Recently, reports surfaced that children were dying after surgery at the University of North Carolina at higher than expected rates, despite warnings from doctors about the department’s performance. And whether in Australia, the United Kingdom, Canada, or the United States, reports show that bullying is alive and well.

This pervasive culture—where consultant doctors, residents, and other hospital staff feel that they cannot bring up critically important points of view—must change. It shouldn’t take investigative journalism to fix the culture that permits silence and bullying. But it does take all of us to rethink how physicians and leaders work together to improve hospital culture.

Investing in improving hospital culture makes a difference to patient care and the quality of the learning experience.

Recent studies on workplace culture show how important it is. In a new JAMA Surgery study, surgeons who had several reports of “unprofessional behaviour” (defined as bullying, aggression, and giving false or misleading information) had patient complication rates about 40% higher than surgeons who had none. Domains of professionalism include competence, communication, responsibility, and integrity. Last year, hospital culture was directly linked to patient outcomes in a major study led by Yale School of Public Health scientist Leslie Curry. Risk-standardized mortality rates after a heart attack were higher in hospitals that had a culture that was less collaborative and open.

Curry’s team created a programme to improve hospital culture, namely by enhancing psychological safety—a term that signifies a willingness of caregivers to speak freely about their concerns and ideas. When hospital culture changed for the better, heart attack outcomes drastically improved and death rates fell.

There are examples of good practice where psychological safety and transparency are valued, and these centres often boast better patient outcomes. A recent systematic review of sixty-two studies, for instance, found fewer deaths, fewer falls, and fewer hospital-acquired infections in healthcare settings that had healthier cultures.

The impact of healthcare workplace culture doesn’t end with patient safety. Physician retention, job satisfaction, and teamwork all benefit from a strong organizational culture in hospitals. This is crucial at a time when burnout in medicine is high. Hospitals can also learn from the tech industry, which discovered early on that psychological safety is key to innovation. In other words, those who are afraid of failing tend not to suggest the bold ideas that lead to great progress.

So how can hospitals make improvements to their culture?

The first thing is to shine a light on the culture by measuring it. Staff surveys and on-site observations can illuminate negative workplace cultures so that boards and executives can consider culture scores in the same regard as wait times and revenue. Regulators and accreditors could incorporate workplace culture indicators in their frameworks to increase accountability. We recently saw this in Sydney, Australia, where a third residency programme lost its accreditation due to bullying of junior doctors.

The second is to hire talented leaders not based just on their clinical competence, but also on their ability to foster inclusiveness, integrity, empathy, and the ability to inspire. By setting the “tone at the top,” leaders can influence the “mood in the middle,” and chip away at ingrained attitudes that tolerate, or even support, bullying, secrecy, and fear of speaking out.

Another solution rejects the hierarchy historically found between doctors, nurses and patients, and embraces diversity and inclusion. Effective collaboration helps shift the tribe-versus-tribe attitudes towards a team mindset. Part of this involves amplifying ideas from voices that are traditionally not heard: those of women, the disabled, and ethnic and sexual minorities. As well, leadership must change to be more diverse and inclusive, to reflect the patient population.

The field of medicine attracts motivated, intelligent, and caring people. But being a good caregiver and being a good leader are very different, and training in the latter is sadly lacking.

For every investigative report that uncovers a hospital’s culture of silence—whether it’s unacceptable bullying, unusual death rates, or pervasive secrecy—there are surely hundreds more left uncovered. The fix to this global epidemic requires deep self-reflection and a firm commitment to choose leaders who promote transparency and openness. Implicit in the physicians’ vow “to do no harm” is the vow not to stay silent, as that too can be harmful. We must first and foremost create cultures that ensure we feel safe to speak up when things aren’t right. Our patients’ lives, and those of our colleagues, depend on it.

**Originally published in the BMJ**

Preventing children from dying in hot cars

One of the biggest lessons I learned a decade ago in public-health graduate school was that education was rarely enough, on its own, to fundamentally change behavior. Educating the public about health was “necessary but not sufficient,” as one of my epidemiology professors had put it. Weight loss, smoking cessation, safe sexual practices — education campaigns weren’t enough.

Decades of educating the public about the dangers of leaving children unattended in cars where the temperature can turn deadly — even on a sunny but not especially hot day — clearly have not been sufficient. The deaths of 11-month-old twins on July 26 in a hot car in the Bronx have brought a fresh sense of urgency to finding innovative technology solutions.

But even before that tragedy, bills had been introduced in Congress earlier this year to address the rising incidence of young children dying in overheated cars.

According to the No Heat Stroke organization, which tracks pediatric heatstroke deaths in vehicles, the average number of such deaths annually since 1998 is 38, with 53 deaths recorded last year — the most ever. Sadly, the nation appears certain to set a record in 2019, with 32 deaths already by the second week of August. The Kids and Cars safety group, another tracker, notes that “over 900 children have died in hot cars nationwide since 1990.”

Fifty-four percent of these victims are 1 year old or younger. In a little more than half of the deaths, children have been mistakenly left alone by their caregiver, in what is known as Forgotten Baby Syndrome. Other children die after climbing into hot cars without an adult’s knowledge, and others have been knowingly, sometimes criminally, left in hot cars.

The American Academy of Pediatrics recommends rear-facing seats for crash-safety reasons and last year removed the age recommendation, focusing instead on height and weight. But there is an immense irony in the safety policy: Rear-facing seats prevent the driver from occasionally making eye contact with the child in the rearview mirror, which would keep the child prominent in the adult’s mind. And because a rear-facing seat is often left in the car whether or not a child is in it, the seat’s presence can be too easily taken for granted.

The father in the New York case said he had accidentally left the twins in rear-facing car seats. (A judge on Aug. 1 paused the pursuit of a criminal case against the twins’ father, pending the results of an investigation.)

As a pediatrics resident physician, I’ve seen hundreds of parents and caregivers of young children, and many are simply overwhelmed, sleep-deprived and vulnerable to making tragic errors. Some parents in high-stress professions may have an additional cognitive load, which can lead to distractions.

The American Academy of Pediatrics suggests several ways to help prevent these tragedies by retraining habits and breaking away from the autopilot mode that often sets in while driving and doing errands. But that’s not enough. The Post noted five years ago that automakers’ promises to use technology to prevent hot-car deaths went unrealized. Liability risks, expense and the lack of clear regulatory guidelines also discouraged innovation. Congressional attempts in recent years to legislate on this front have failed.

That all may be changing, given the rising number of child deaths. The Hot Cars Act of 2019, introduced in the House by Rep. Tim Ryan (D-Ohio), would require all new passenger cars to be “equipped with a child safety alert system.” The bill mandates a “distinct auditory and visual alert to notify individuals inside and outside the vehicle” when the engine has been turned off and motion by an occupant is detected.

The Hyundai Santa Fe and Kia Telluride already offer such technology, which is a welcome step in the right direction. But it would not identify infants who have fallen asleep and lie motionless; these detectors are not typically sensitive enough to detect the rise and fall of a child’s chest during breathing.

The Senate version of the hot cars bill proposes an alert to the driver, when the engine is turned off, if the back door was opened earlier, offering a reminder that a child may have been placed in a car seat.

The development of sensors for autonomous-vehicle technology is promising — how much harder will it be to alert drivers to people’s presence inside the car, not outside? Other ideas to consider: A back-seat version of the passenger-seat weight sensor that cues seat-belt use, with a lower weight threshold to alert the driver (and loud enough for a passerby to hear) once the engine is shut off. Or try something that doesn’t rely on motion or weight — a carbon-dioxide detector that would sense rising levels (we exhale carbon dioxide, and this rises in a closed and confined space) after the engine is off, sounding an alarm while automatically cooling the vehicle.
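To make the carbon-dioxide idea concrete, here is a minimal, purely illustrative Python sketch. The threshold values, the simulated sensor, and the alarm and cooling hooks are assumptions made for illustration; they are not features of any proposed legislation or existing vehicle.

```python
import random
import time

# Purely illustrative sketch of the carbon-dioxide idea above; thresholds,
# sensor behavior, and vehicle hooks are assumptions, not specifications.
AMBIENT_CO2_PPM = 400.0  # rough outdoor baseline
ALERT_CO2_PPM = 1200.0   # assumed level suggesting someone is breathing in a closed cabin

def read_cabin_co2_ppm(minutes_since_shutoff: float, occupant_present: bool) -> float:
    """Simulated sensor: CO2 climbs steadily only if someone is left in the cabin."""
    rise = 150.0 * minutes_since_shutoff if occupant_present else 0.0
    return AMBIENT_CO2_PPM + rise + random.uniform(-20.0, 20.0)

def sound_alarm() -> None:
    print("ALARM: possible occupant left in vehicle")

def start_cooling() -> None:
    print("Cooling system activated")

def monitor_after_shutoff(occupant_present: bool) -> None:
    """After the engine is shut off, watch for rising CO2 and respond."""
    for minute in range(30):
        ppm = read_cabin_co2_ppm(float(minute), occupant_present)
        if ppm >= ALERT_CO2_PPM:
            sound_alarm()    # loud enough for a passerby to hear
            start_cooling()  # automatically cool the vehicle
            return
        time.sleep(0.1)      # stands in for a one-minute polling interval
    print("No occupant detected; monitoring ended")

if __name__ == "__main__":
    monitor_after_shutoff(occupant_present=True)
```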

No parent of a young child is immune to Forgotten Baby Syndrome — we are all capable of becoming distracted, with terrible consequences. Those who have been devastated by such a loss deserve our sympathy, not our scorn. To avoid future such tragedies, applying technical innovation to passenger vehicles is essential.

**Originally published in the Washington Post**

AI Could Predict Death. But What If the Algorithm Is Biased?


Researchers are studying how artificial intelligence could predict risks of premature death. But the health care industry needs to consider another risk: unconscious bias in AI.

 

Earlier this month the University of Nottingham published a study in PLOS ONE about a new artificial intelligence model that uses machine learning to predict the risk of premature death, using banked health data (on age and lifestyle factors) from Brits aged 40 to 69. This study comes months after a joint study between UC San Francisco, Stanford, and Google, which reported results of machine-learning-based data mining of electronic health records to assess the likelihood that a patient would die in hospital. One goal of both studies was to assess how this information might help clinicians decide which patients might most benefit from intervention.

The FDA is also looking at how AI will be used in health care and posted a call earlier this month for a regulatory framework for AI in medical care. As the conversation around artificial intelligence and medicine progresses, it is clear we must have specific oversight around the role of AI in determining and predicting death.

There are a few reasons for this. To start, researchers and scientists have flagged concerns about bias creeping into AI. As Eric Topol, physician and author of the book Deep Medicine: Artificial Intelligence in Healthcare, puts it, the challenge of bias in machine learning originates from the “neural inputs” embedded within the algorithm, which may include human biases. And even though researchers are talking about the problem, issues remain. Case in point: The launch of a new Stanford institute for AI a few weeks ago came under scrutiny for its lack of ethnic diversity.

Then there is the issue of unconscious, or implicit, bias in health care, which has been studied extensively, both as it relates to physicians in academic medicine and toward patients. There are differences, for instance, in how patients of different ethnic groups are treated for pain, though the effect can vary based on the doctor’s gender and cognitive load. One study found these biases may be less likely in black or female physicians. (It’s also been found that health apps in smartphones and wearables are subject to biases.)

In 2017 a study challenged the impact of these biases, finding that while physicians may implicitly prefer white patients, it may not affect their clinical decision-making. However, it was an outlier in a sea of other studies finding the opposite. Even at the neighborhood level, which the Nottingham study looked at, there are biases—for instance, black people may have worse outcomes for some diseases if they live in communities that hold more racial bias toward them. And biases based on gender cannot be ignored: Women may be treated less aggressively post-heart attack (acute coronary syndrome), for instance.

When it comes to death and end-of-life care, these biases may be particularly concerning, as they could perpetuate existing differences. A 2014 study found that surrogate decisionmakers of nonwhite patients are more likely to withdraw ventilation compared to white patients. The SUPPORT (Study To Understand Prognoses and Preferences for Outcomes and Risks of Treatments) study examined data from more than 9,000 patients at five hospitals and found that black patients received less intervention toward end of life, and that while black patients expressed a desire to discuss cardiopulmonary resuscitation (CPR) with their doctors, they were statistically significantly less likely to have these conversations. Other studies have found similar conclusions regarding black patients reporting being less informed about end-of-life care.

Yet these trends are not consistent. One study from 2017, which analyzed survey data, found no significant difference in end-of-life care that could be related to race. And as one palliative care doctor indicated, many other studies have found that some ethnic groups prefer more aggressive care toward end of life—and that this may be related to a response to fighting against a systematically biased health care system. Even though preferences may differ between ethnic groups, bias can still result when a physician may unconsciously not provide all options or make assumptions about what options a given patient may prefer based on their ethnicity.

We know that health providers can try to train themselves out of their implicit biases. The unconscious bias training that Stanford offers is one option, and something I’ve completed myself. Other institutions have included training that focuses on introspection or mindfulness. But it’s an entirely different challenge to imagine scrubbing biases from algorithms and the datasets they’re trained on.

Given that the broader advisory council that Google recently launched to oversee the ethics behind AI has now been canceled, a better option would be a more centralized regulatory body, perhaps built upon the proposal put forth by the FDA, that could serve universities, the tech industry, and hospitals.

Artificial intelligence is a promising tool that has shown its utility for diagnostic purposes, but predicting death, and possibly even determining death, is a unique and challenging area that could be fraught with the same biases that affect analog physician-patient interactions. And one day, whether we are prepared or not, we will be faced with the practical and philosophical conundrum of having a machine involved in determining human death. Let’s ensure that this technology doesn’t inherit our biases.

 

**Originally published in Wired**
