Welcome to the Blog

Worried About That New Medical Study? Read This First.

There’s more than meets the eye — here are some tips to help avoid confusion.

In August 2019, JAMA Pediatrics, a widely respected journal, published a study with a contentious result: Pregnant women in Canada who were exposed to higher levels of fluoride (such as from drinking water) were more likely to have children with lower I.Q. scores. Some media outlets ran overblown headlines claiming that fluoride exposure actually lowers I.Q. And while academics and journalists quickly pointed out the study’s many flaws — that it didn’t prove cause and effect, and that it showed a drop in I.Q. only in boys, not girls — the damage was done. People took to social media, voicing their concerns about the potential harms of fluoride exposure.

We place immense trust in scientific studies, as well as in the journalists who report on them. But deciding whether a study warrants changing the way we live our lives is challenging. Is that extra hour of screen time really devastating? Does feeding processed meat to children increase their risk of cancer?

As a physician and a medical journalist with training in biostatistics and epidemiology, I sought advice from several experts about how parents can gauge the quality of research studies they read about. Here are eight tips to remember the next time you see a story about a scientific study.

1. Wet pavement doesn’t cause rain.
Put another way, correlation does not equal causation. This is one of the most common traps that health journalists fall into with studies that have found associations between two things — like that people who drink coffee live longer lives — but which haven’t definitively shown that one thing (coffee drinking) causes another (a longer life). These types of studies are typically referred to as observational studies.

When designing and analyzing studies, experts must have satisfactory answers to several questions before determining cause and effect, said Elizabeth Platz, Sc.D., a professor of epidemiology and deputy chair of the department of epidemiology at the Johns Hopkins Bloomberg School of Public Health. In smoking and lung cancer studies, for example, researchers needed to show that the chemicals in cigarettes affected lung tissue in ways that resulted in lung cancer, and that those changes came after the exposure. They also needed to show that those results were reproducible. In many studies, cause and effect isn’t proven after many years, or even decades, of study.

2. Mice aren’t men.
Large human clinical studies are expensive, cumbersome and potentially risky for participants. This is why researchers often turn first to mice or to other model organisms (like flies, worms, rats, dogs and monkeys) whose biology shares key features with our own.

If you spot a headline touting a finding in animals — say, that aspirin thwarts bowel cancer in mice — the result is potentially notable, but it could take years or even decades (if it happens at all) to confirm the same findings in humans.

3. Study quality matters.
Not all study designs are created equal. In medicine, randomized clinical trials and systematic reviews are kings. In a randomized clinical trial, researchers typically split people into at least two groups: one that receives or does the thing the researchers are testing, like a new drug or daily exercise; and another that receives either the current standard of care (like a statin for high cholesterol) or a placebo. To decrease bias, neither the participants nor the researchers should ideally know which group each participant is in.
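
If it helps to see the mechanics, here is a toy sketch in Python of the random-assignment step (the participant labels and group sizes are invented for illustration, not taken from any real trial):

```python
import random

def randomize(participants, seed=42):
    """Toy illustration: shuffle participants, then split them into
    a treatment arm and a control (or placebo) arm of equal size."""
    rng = random.Random(seed)  # fixed seed so the example is reproducible
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "control": shuffled[half:]}

arms = randomize([f"participant_{i}" for i in range(200)])
print(len(arms["treatment"]), len(arms["control"]))  # 100 100
```

Because chance alone decides who lands in which arm, the two groups should be alike on average, which is what lets researchers attribute a difference in outcomes to the treatment itself.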

Systematic reviews are similarly useful, in that researchers gather anywhere from five to more than 100 randomized controlled trials on a given subject and comb through them, looking for patterns and consistency among their conclusions. These types of studies are important because they help to show potential consensus in a given body of evidence.

Other types of studies, which aren’t as rigorous as the above, include: cohort studies (which follow large groups of people over time to look for the development of disease), case-control studies (which first identify the disease, like cancer, and then trace back in time to figure out what might have caused it) and cross-sectional studies (which are usually surveys that try to identify how a disease and exposure might have been correlated with each other, but not which caused the other).

Next on the quality spectrum come case reports (which describe what happened to a single patient) and case series (a group of case reports). These are the lowest in quality, but they often inspire higher-quality studies.

4. Statistics can be misinterpreted.
Statistical significance is one of the most common sources of confusion for the lay reader. When a study or a news report says that a finding was “statistically significant,” it means that the results were unlikely to have happened by chance alone.

But a result that is statistically significant may not be clinically significant, meaning it likely won’t change your day-to-day life. Imagine a randomized controlled trial that split 200 women with migraines into two groups of 100. One was given a pill to prevent migraines and the other a placebo. After six months, 11 women in the pill group and 12 in the placebo group had at least one migraine per week, but the 11 women in the pill group also experienced arm tingling as a potential side effect. Even if women in the pill group were statistically less likely to have migraines than those in the placebo group, the difference may still be too small to recommend the pill for migraines, since it spared just one additional woman out of 100. Researchers would also have to weigh the potential side effects.
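
To see why a one-in-a-hundred gap would not impress a statistician, here is a quick sketch using Python’s scipy library to test the hypothetical trial above (the counts come from the example in this article, not from a real study):

```python
from scipy.stats import fisher_exact

# Hypothetical trial from the text: 11 of 100 women in the pill group
# and 12 of 100 in the placebo group still had weekly migraines.
table = [[11, 89],   # pill group: migraines, no migraines
         [12, 88]]   # placebo group
odds_ratio, p_value = fisher_exact(table)
print(f"p = {p_value:.2f}")  # close to 1.0: far from statistical significance
```

And even if a much larger trial did push a difference this small past the significance threshold, the absolute benefit of roughly one woman per hundred might still not justify the side effects.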

The opposite is also true. If a study reports that regular exercise helped relieve chronic pain symptoms in 30 percent of its participants, that might sound like a lot. But if the study included just 10 people, that’s only three people helped. This finding may not be statistically significant, but could be clinically important, since there are limited treatment options for people with chronic pain, and might warrant a larger trial.

5. Bigger is often better.
Scientists arguably can never fully know the truth about a given topic, but they can get close. And one way of doing that is to design a study that has high power.

“Power is telling us what the chances are that a study will detect a signal, if that signal does exist,” John Ioannidis, M.D., a professor of medicine and of health research and policy at Stanford Medical School, said via email.

The easiest way for researchers to increase a study’s power is to increase its size. A trial of 1,000 people typically has higher power than a trial of 500, and so on. Simply put, larger studies are more likely to help us get closer to the truth than smaller ones.
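
As a rough illustration of that relationship, here is a sketch using the statsmodels Python library; the assumed outcome rates (10 percent with treatment versus 15 percent without) are invented for the example:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical effect: the outcome occurs in 10% of treated
# participants versus 15% of controls.
effect = proportion_effectsize(0.10, 0.15)

for n_per_arm in (250, 500):  # i.e., trials of 500 vs. 1,000 people
    power = NormalIndPower().power(effect_size=effect, nobs1=n_per_arm,
                                   alpha=0.05, ratio=1.0)
    print(f"{2 * n_per_arm} participants: power = {power:.2f}")
```

Doubling the trial’s size substantially raises the probability of detecting this (hypothetical) effect, which is one reason larger studies carry more weight.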

6. Not all findings apply to you.
If a news article reports that a high-quality study had statistical and clinical significance, the next step might be to determine whether the findings apply to you.

If researchers are testing a hypothetical new drug to relieve arthritis symptoms, they may only include participants who have arthritis and no other conditions. They may eliminate those who take medications that might interfere with the drug they’re studying. Researchers may recruit participants by age, gender or ethnicity. Early studies on heart disease, for instance, were performed primarily on white men.

Each of us is unique, genetically and environmentally, and our lives aren’t highly controlled like a study. So take each study for what it is: information. Over time, it will become clearer whether one conclusion was important enough to change clinical recommendations. Which gets to a related idea …

7. One study is just one study.
If findings from one study were enough to change medical practices and public policies, doctors would be practicing yo-yo medicine, where recommendations would change from day to day. That doesn’t typically happen, so when you see a headline that begins or ends with, “a study found,” it’s best to remember that one study isn’t likely to shift an entire course of medical practice. If a study is done well and has been replicated, it’s certainly possible that it may change medical guidelines down the line. If the topic is relevant to you or your family, it’s worth asking your doctor whether the findings are strong enough to suggest that you make different health choices.

8. Not all journals are created equal.
Legitimate scientific journals tend to publish studies that have been rigorously and objectively peer reviewed, which is the gold standard for scientific research and publishing. A good way to spot a high-quality journal is to look for one with a high impact factor — a number that primarily reflects how often the average article from a given journal has been cited by other articles in a given year. (Keep in mind, however, that lower-impact journals can still publish quality findings.) Most studies indexed on PubMed, a database of published scientific research articles and book chapters, are peer reviewed.
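
For the curious, the standard two-year impact factor is simple arithmetic; this small Python sketch uses made-up numbers for a hypothetical journal:

```python
def impact_factor(citations, citable_items):
    """Two-year impact factor: citations received this year to articles
    the journal published in the previous two years, divided by the
    number of citable items it published in those two years."""
    return citations / citable_items

# Hypothetical journal: 5,000 citations to 250 articles -> 20.0
print(impact_factor(5000, 250))
```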

Then there are so-called ‘predatory’ journals, which aren’t produced by legitimate publishers and which will publish almost any study — whether it’s been peer reviewed or not — in exchange for a fee. (Legitimate journals may also request fees, primarily to cover their costs or to publish a study in front of a paywall, but only if the paper is accepted.) Predatory journals are attractive to some researchers who may feel pressure to ‘publish or perish.’ It’s challenging, however, to distinguish them from legitimate ones, because they often sound or look similar. If an article has grammatical errors and distorted images, or if its journal lacks a clear editorial board and physical address, it might be a predatory journal. But it’s not always obvious, and even experienced researchers are occasionally fooled.

Reading about a study can be enlightening and engaging, but very few studies are definitive enough to justify changing your daily life. When you see the next dramatic headline, read the story — and if you can find it, read the study, too (PubMed or Google Scholar are good places to start). If you have time, discuss the study with your doctor and see whether any reputable organizations like the Centers for Disease Control and Prevention, World Health Organization, American Academy of Pediatrics, American College of Cardiology or National Cancer Institute have commented on the matter.

Medicine is not an exact science, and things change every day. In a field of gray, where headlines sometimes try to force us to see things in black and white, start with these tips to guide your curiosity. Hopefully, they’ll help you decide when — and when not — to make certain health and lifestyle choices for yourself and for your family.

**Originally published in the New York Times**

Soon you’ll be able to easily screen your brain for abnormalities—but should you?

Earlier this month, in a private imaging clinic in the Ginza district of downtown Tokyo, I lay patiently as the MRI machine buzzed and rattled. I wasn’t there at the request of a doctor, but to screen my brain using a machine learning tool called EIRL, which is named after the Nordic goddess Eir. It’s the latest technology, focused on detecting brain aneurysms, from Tokyo-based LPixel, one of Japan’s largest companies working on artificial intelligence for healthcare. Brain aneurysms occur when a blood vessel swells up like a balloon. If it bursts, it can be deadly.

After the MRI, the images are uploaded to a secure cloud, where EIRL begins its analysis, looking for abnormalities. Each scan is then checked by a radiologist, followed by a neurosurgeon. The final report, with the images, is produced within 10 days and is accessible through a secure portal.

While LPixel offers a number of other A.I. tools to assist with CAT scans, X-rays, real-time colonoscopy images, and research image analysis, EIRL for brain aneurysm detection remains its most advanced offering. The EIRL algorithm was built on data extracted from over 1,000 images with confirmed brain aneurysms, in partnership with four Japanese universities, including the University of Tokyo and Osaka City University. A 2019 study by LPixel and its partner universities found that EIRL for brain aneurysms had a high sensitivity of between 91 and 93% (sensitivity refers to the likelihood of detecting an aneurysm if one is indeed present).
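
Sensitivity itself is a simple ratio. Here is a minimal Python sketch, with a hypothetical count of 92 detected aneurysms out of 100 confirmed cases, chosen only to sit inside the reported 91-93% range:

```python
def sensitivity(true_positives, false_negatives):
    """Of all scans that truly contain an aneurysm, the share
    the tool correctly flags."""
    return true_positives / (true_positives + false_negatives)

# Hypothetical: the tool catches 92 of 100 confirmed aneurysms.
print(sensitivity(92, 8))  # 0.92
```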

Mariko Takahashi, project manager with LPixel, explains that EIRL differs from computer-assisted devices in that there is a learning component: “EIRL becomes more accurate the more it’s used,” she says. According to Takahashi, EIRL has detected cases of aneurysms that require immediate medical attention, even though the patients displayed no symptoms.

The EIRL brain aneurysm algorithm was approved by the Japanese Pharmaceutical and Medical Devices Agency (PMDA) in the software-as-a-medical-device category in September. The algorithm is based entirely on data from Japanese patients, but it could be generalized to other populations, says Takahashi. She notes, though, that her group is reviewing studies suggesting that the anatomy of brain vessels in Japanese people may vary slightly from that of other ethnic groups, and is considering whether the algorithm would therefore need to be validated in other populations.

EIRL does have competitors. A Korean startup called Deepnoid is developing a brain aneurysm detection tool using MRI, and GE Healthcare is using brain CT to detect aneurysms. Stanford, too, is positioning itself to use deep learning in brain CTs to detect aneurysms, though its tool appears to be intended for diagnosis, not screening. Competitors in Belgium and China are using AI to detect brain tumors as well.

LPixel hopes to gain FDA approval for EIRL in the U.S. in 2020 and is working to ensure it meets HIPAA privacy and security requirements.
But just because you might soon be able to get AI-assisted screening for your brain, should you?

It’s a complicated and very personal question. In the U.S. and Canada, there is a push to reduce unnecessary testing, which includes limiting screening tests to those that are inexpensive and have been shown to reduce deaths from disease, as with breast cancer and colon cancer screening. Currently in the U.S., Canada, and U.K., there is no recommended population-wide screening program for brain aneurysms, and the American College of Radiology recommends that head and neck MRIs be limited to situations where symptoms suggest a pathology such as a tumor, or to cases where another cancer (such as breast cancer) may have metastasized to the brain.

There are dangers to overscreening, particularly when it comes to the brain: for one, the possibility of unnecessary and invasive testing. In essence: when you go hunting for abnormalities in the brain, you might find things you didn’t expect to uncover—for example, an “incidentaloma,” which is a lesion that isn’t necessarily harmful or may just be a normal variation in human anatomy. These can occur in up to one-third of healthy patients. The harm involved in investigating these, such as the risk of infection when obtaining a sample, can outweigh the benefits.
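
There is also a simpler statistical reason to be wary of screening everyone: when a condition is rare, even an accurate test produces mostly false alarms. This Python sketch applies Bayes’ rule; the specificity and prevalence figures are assumptions chosen for illustration, not numbers reported by LPixel or the studies above:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive screen is a true positive."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Assume 92% sensitivity, 95% specificity, and 2% prevalence:
print(f"{positive_predictive_value(0.92, 0.95, 0.02):.0%}")  # roughly 27%
```

Under these assumed numbers, only about a quarter of positive screens would reflect a real aneurysm; the rest would be healthy people sent for further workup.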

However, those who are at high risk of aneurysms, such as those with a family history, may warrant screening. Notably, brain aneurysms are more common in Japan than in other populations, a statistic that may be muddied by the fact that more people there choose to be screened. The aneurysms may also be more likely to rupture. And MRI screening in Japan is less expensive: roughly $200-$300 for a head MRI, around 50-75% less than in North America.

Dr. Eric Topol, physician and author of the book Deep Medicine: Artificial Intelligence in Healthcare, shares these sentiments. “There’s no question AI will help accuracy of brain image interpretation (meaning the fusion of machine and neuroradiologist, complementary expertise) but there are drawbacks such as the lack of prospective studies in the real clinical world environment; potential for algorithmic malware and glitches, and many more, which I reviewed in the ‘Deep Liabilities’ chapter of my book,” Topol says. “Personally I do not see the benefit to using AI technology for ‘screening’ of brain aneurysms at this time, as there’s no data or evidence to support the benefit, at least in patients without relevant symptoms.”

That said, if the algorithm is validated for populations outside Japan, there could be potential in diagnostic situations, for instance in hospitals as opposed to private clinics, as well as for high-risk individuals who need screening. And that’s where the company seems to be headed.

“Right now we’re exploring how to best roll out technology in hospitals in Japan, in collaboration with our partners,” Takahashi says.

As for me, I received my results about 9 days later, and—assuming the translation from Japanese to English was accurate—according to EIRL, there were no abnormalities.

**Originally published in Fast Company**

Addressing the curiosity decline in medicine

“So, if we’re worried about viral myocarditis, would the patient have similar symptoms as someone with pericarditis?” The astute medical student slipped me his question as we hurriedly made our way across the ward to the next patient’s room.

He had wondered whether inflammation of the heart muscle (as in myocarditis) presents like inflammation of the protective layer around the heart (the pericardium). Classically, we are taught that pericarditis-type chest pain improves when sitting up (because the inflamed layer is kept away from the nerves that transmit pain) and worsens when lying down or taking deep breaths.

“Well there is some overlap in clinical signs,” I began. But we were already on to the next patient, and so my attention was redirected. The student had looked eager to hear my response, but that expression quickly slipped away.

These missed opportunities, to explore and address complex questions, are frequent in medical education, and the downstream consequences of not fostering this curiosity are significant.

Curiosity is the necessary fuel for rethinking one’s own biases, and it can reap dividends for patient care. When different doctors consider the same set of symptoms independently, they may reach different conclusions; one study found, for example, that up to 21% of second opinions differ from the original diagnosis.

Allowing doctors to express their curiosity is crucial, and it’s time we encouraged all medical trainees to be curious.

The decline in curiosity could be caused, in part, by medical trainees assuming a traditionally passive role in hierarchically organized settings like hospitals, suggests a 2011 paper, coauthored by Ronald Epstein, MD, a professor of family medicine, psychiatry, oncology and medicine at the University of Rochester Medical Center.

“There’s a dynamic tension here. People pursue medicine because they are curious about the human experience and scientific discovery, but early in training they are taught to place things in categories and to pursue certainty,” Epstein told me.

A 2017 McGill University study led by pediatrician Robert Sternzus, MD, took this theme a step further. Sternzus and colleagues surveyed medical students across all four years about two types of curiosity: trait curiosity, an inherent tendency to be curious; and state curiosity, the curiosity evoked (or suppressed) by a particular environment. Trait curiosity across all four years was significantly higher than state curiosity. The authors concluded that the medical students’ natural curiosity may not have been supported by their learning environment.

“I had always felt that curiosity was strongly linked to performance in the students I worked with,” Sternzus says. “I also felt, as a learner, that I was at my best when I was most curious. And I certainly could remember periods in my training where that curiosity was suppressed. In our study the trends that we found with regards to curiosity across the years confirmed what I had hypothesized.” Sternzus has since spearheaded a faculty development workshop on promoting curiosity in medical trainees.

So what might be the solution, especially as the move towards competency-based training programs may not reward curiosity, and at a time where companies in places like Silicon Valley — which invest in curious and talented minds — position themselves to be another gatekeeper of health care?

New work led by Jatin Vyas, MD, PhD, an infectious disease physician and researcher who directs the internal medicine residency at Massachusetts General Hospital, offers one idea. His team developed a two-week elective program, called Pathways, which allows an intern to investigate a case where the diagnosis is unknown or the science isn’t quite clear. The intern then presents their findings to a group of up to 80 experienced physicians and trainees.

“What I have found is that many interns and residents have lots of important questions. If our attendings are not in tune with that — and it’s often due to a lack of time or expertise — the residents’ questions are oftentimes never discussed,” Vyas says. “When I was a resident, my mentors helped me articulate these important questions, and I believe this new generation of trainees deserve the same type of stimulation and the Pathways elective is one way to help address this.”

At the end of June, Pathways reached the end of its second year, and Vyas recounts that resident satisfaction, clinical-teacher satisfaction, and patient satisfaction were all high. “Patients have expressed gratitude for having trainees eager to take a fresh look at their case, even though they may not receive a breakthrough answer,” Vyas says.

The job of more experienced clinicians is to nurture the curiosity of learners, not just for the value it provides to students but for the benefits it brings to patients, Faith Fitzgerald, MD, an internist at the University of California, Davis, has written. Physicians of the future, and the patients they care for, deserve this.

**Originally published in the Stanford Medicine Scope Blog**

Talking to Your Child’s Doctor About Alternative Medicine

By Drs. Amitha Kalaichandran, Roger Zemek and Sunita Vohra

A few months ago, the Centers for Disease Control and Prevention published a report about a young boy from Connecticut who developed lead poisoning as a direct result of his parents giving him a magnetic healing bracelet for teething. Every few months, it seems, a story covers a tragic case of a parent choosing an unconventional medical treatment that causes harm.

More often, the alternative treatments parents choose pose little risk to their kids — anything from massage therapy to mind-body therapies like mindfulness meditation and guided imagery. Research indicates that overall, there are few serious adverse events related to using alternative therapies. But when they do occur, they can be catastrophic, in some cases because caregivers or alternative care providers are poorly informed on how to recognize the signs of serious illness.

The National Center for Complementary and Integrative Health, part of the National Institutes of Health, now refers to these alternative treatments as complementary health approaches, or C.H.A. They are defined as “a group of diverse medical and health care systems, practices and products not presently considered to be part of conventional Western medicine.” In some cases they complement traditional care. In others they are used in place of standard medical practices.

It’s a polarizing subject that unfortunately gets muddled with conversations about anti-vaccination. But while some anti-vaxxers use complementary health approaches, people who use C.H.A. don’t necessarily doubt vaccine effectiveness.

What’s less clear is how many parents choose complementary health approaches for their children, for which conditions, and whether parents perceive them as effective. We also know very little about parents’ willingness to discuss their use with their child’s doctor, and most doctors receive little training in C.H.A. use, especially in children, or in how to counsel parents about it.

To explore these questions, we surveyed parents in a busy emergency room in eastern Ontario, Canada. As reported in our recent study, just over 60 percent said they gave their child a C.H.A. within the last year. Vitamins and minerals (59 percent) were the most common ingested treatment, and half the parents used massage. Our research found that parents with a university-level education were more likely to use a complementary treatment than those with less education.

Parents also perceived most of the C.H.A. that they used — from vitamins and minerals to aromatherapy to massage — as effective. However, fewer than half of parents felt that homeopathy or special jewelry would be helpful.

As reported in our recent paper, we then asked parents if they had tried a complementary therapy for the problem at hand before they came to the emergency room. Just under one-third reported using C.H.A. for a specific condition, most often for gastrointestinal complaints. Interestingly, in the case of emergency care, there was no correlation with the parents’ level of education.

In work we previously presented at the International Congress of Pediatrics, we asked these parents whether they believed their provider — a nurse practitioner or a doctor — was knowledgeable about complementary medicine. About 70 percent believed their health provider was knowledgeable about C.H.A., although this perception was less likely among parents with a university-level education. Surprisingly, 88 percent said they felt comfortable discussing their use of C.H.A. with their medical provider.

Previous reports have found that only between 40 percent and 76 percent of parents actually disclose C.H.A. use to their doctor. In our study, we were talking to parents who had brought their child to an emergency room, where they would be more likely to mention whatever treatments they had tried. And in many cases, parents may not take their child to the doctor at all if the problem is not a serious one. So the true proportion of parents who use C.H.A. for their children is likely even higher than reported.

Our findings underscore the need for parents and their child’s health providers to have more open conversations about what they are giving to their child for health reasons.

Medical providers also need to ask actively whether C.H.A. is being used and to stay up-to-date on the current evidence about complementary therapies, including potential interactions with any medications the child may also be taking. Much of this information is summarized on the N.C.C.I.H. website.

Here are some ways parents can approach the issue of alternative therapies with their doctors:

■ Write down everything your child is using as though it’s a medication. Include any special diets, teas and visits to other complementary medicine providers.

■ Keep track of any positive and negative results from C.H.A. that you notice — including no effect — and the cost involved.

■ If your child’s health provider doesn’t ask about C.H.A., start the conversation.

Physicians and other medical providers should:

■ Learn more about these treatments and the evidence behind them. The N.C.C.I.H. is a good place to start.

■ Try not to be judgmental; causing a rift with a parent because you might not agree with their choices may cause a breakdown in the therapeutic relationship.

■ Evaluate risks and benefits, and be aware of what is unknown about the specific C.H.A. being used. Make efforts to learn more about the therapy and take action if there are clear side effects and risks, documenting the discussion where appropriate.

Parents and doctors are on the same team when it comes to caring for a child’s health. Taking time to explore what parents and children are using, including any therapies that lie outside the scope of conventional medical practice, provides an opportunity to have open and honest discussions about risk, benefits and safety around complementary health approaches.

**Originally published in the New York Times**

Facial recognition may reveal things we’d rather not tell the world. Are we ready?

Can a machine learn things from your face that you would rather not tell the world? Stanford Graduate School of Business researcher Michal Kosinski set out to explore that question in a controversial new study. Using a deep-learning algorithm, Kosinski and his colleagues fed in thousands of photos of white Americans who self-identified as either gay or straight, tagging each accordingly. The software then learned physical commonalities — micro quantitative differences based on facial measurements — to distinguish gay from straight features.

His team found that the computer had astonishingly accurate “gaydar,” though it was slightly better at identifying gay men (81 percent accuracy) than lesbians (74 percent accuracy). Notably, the software outperformed human judges in the study by a wide margin.

Kosinski’s work was based on previous but controversial research that suggests that the hormonal balance in the womb influences sexual orientation as well as appearance. “Data suggests that [certain groups of] people share some facial characteristics that are so subtle as to be imperceptible to the human eye,” Kosinski says. The study, according to Kosinski, merely tested that theory using a respected algorithm developed by Oxford Vision Lab.

Predictably, rights groups, including GLAAD and the Human Rights Campaign, were outraged by Kosinski’s study, questioning his methods while suggesting that his program was a threat to members of the gay community.

Kosinski is known as both a researcher and a provocateur. He says that one of the goals for the study was to warn us of the dangers of artificial intelligence. He designed his research, he says, to goad us into taking privacy issues around machine learning more seriously. Could AI “out” people in any number of ways, making them targets of discrimination?

But for the sake of argument, let’s suppose that facial-recognition technology will keep improving, and that machines may someday be able to quickly detect a variety of characteristics — from homosexuality to autism — that the unaided human eye cannot. What would it mean for society if highly personal aspects of our lives were written on our faces?

I remember the first time I saw a baby with Down syndrome, a condition that appears in patients who have a third copy of chromosome 21 instead of the usual pair. The infant was born in a community hospital to a mother who had declined genetic screening. As he lay in his cot a few hours after birth, his up-slanted “palpebral fissures” (eyelid openings) and “short philtrum” (groove in the upper lip), among many other things, seemed subtle. It took only a glance for my attending, an experienced pediatrician, to know that the diagnosis was likely. (Later on, a test called a karyotype confirmed the presence of an extra chromosome.)

Could AI someday replace a professional human diagnostician? Just by looking at a subject, Angela Lin, a medical geneticist at Massachusetts General Hospital, can discern a craniofacial syndrome with a high degree of accuracy. She also uses objective methods — measuring the distance between eyes, lips, and nose, for example — for diagnostic confirmation. But this multifaceted technique is not always perfect. That’s why she believes facial recognition software could be useful in her work.

Lin stresses that facial recognition technology is just one of many diagnostic tools, and that in most cases it’s not a substitute for a trained clinical eye. She also worries about how widespread use of facial recognition software could be problematic: “The main barrier for me is privacy concerns. . . we want to be sure the initial image of the person is deleted.”

Autism, for one, may involve physical characteristics too subtle for the human eye to detect. A few months ago, an Australian group published a study that used facial-recognition technology to discern the likelihood of autism using 3-D images of children with and without the condition. As in Kosinski’s study, the computer “learned” the facial commonalities of those with autism and successfully used them as a predictive tool.

The lead study author, Diana Tan, a PhD candidate at University of Western Australia School of Psychological Sciences, warns that the technology has its limitations. A diagnosis of autism requires two distinct elements: identifying social and communication challenges, and behavioral analysis of repetitive behaviors and restrictive interests.

Some scientists believe the social-communication difficulties may be linked to elevated prenatal testosterone — known as the “extreme male brain” theory of autism. Facial masculinization may result from this excessive testosterone exposure, and the computer algorithm was good at picking it up, which could explain its ability to predict autism through a photo alone.

The facial recognition technology was less successful in tracking the traits related to severity: that is, repetitive behaviors and restrictive interests. While the computer successfully identified children with autism whose presentation was marked by a lack of empathy and sensitivity and other typically male-associated traits (i.e., social-communication issues), it was less successful in diagnosing children who predominantly exhibited restrictive and repetitive behaviors. This suggests that the latter aspects may not be related to hormone exposure and its associated physical changes.

“While [the study] supports the ‘hypermasculine brain theory’ of autism,” Tan says, “it’s not a perfect correlation.”

“In my view,” she says, “[our technique] should be complementary to existing behavioral and development assessments done by a trained doctor, and perhaps one day it could be done much earlier to help evaluate risk,” adding that 3-D prenatal ultrasounds may potentially provide additional data, allowing autism risk to be predicted before birth.

Regardless of the technology’s apparent shortcomings, companies have been quick to leverage big data and facial-recognition capabilities to assist diagnosticians. Boston-based FDNA has been developing the technology for clinical settings over the last five years and in 2014 released a mobile app for professionals called Face2Gene. In principle, it’s similar to the facial recognition software used in Tan’s and Kosinski’s studies, but rather than studying pure science, it’s intended to do what doctors like Lin spend decades learning to do: diagnose genetic conditions based on facial characteristics.

Last year, the company teamed up on a study to use the app to help with autism diagnoses. The work has not yet been validated in the clinical setting, but it is already gaining adherents.

“We have over 10,000 doctors and geneticists in 120 countries using the technology,” says Jeffrey Daniels, FDNA’s marketing director. “As more people use it, the database expands, which improves its accuracy. And in cases where doctors input additional data” — for instance, information about short stature or cognitive delay, which often helps narrow down a diagnosis — “we can reach up to 88 percent diagnostic accuracy for some conditions.”

Apple, Amazon, and Google have all teamed up with the medical community to try to develop a host of diagnostic tools using the technology. At some point, these companies may know more about your health than you do. Questions abound: Who owns this information, and how will it be used?

Could someone use a smartphone snapshot, for example, to diagnose another person’s child at the playground? The Face2Gene app is currently limited to clinicians; while anyone can download it from the App Store on an iPhone, it can only be used after the user’s healthcare credentials are verified. “If the technology is widespread,” says Lin, “do I see people taking photos of others for diagnosis? That would be unusual, but people take photos of others all the time, so maybe it’s possible. I would obviously worry about the invasion of privacy and misuse if that happened.”

Humans are pre-wired to discriminate against others based on physical characteristics, and programmers could easily manipulate AI programming to mimic human bias. That’s what concerns Anjan Chatterjee, a neuroscientist who specializes in neuroesthetics, the study of what our brains find pleasing. He has found that, relying on baked-in prejudices, we often quickly infer character just from seeing a person’s face. In a paper slated for publication in Psychology of Aesthetics, Creativity, and the Arts, Chatterjee reports that a person’s appearance — and our interpretation of that appearance — can have broad ramifications in professional and personal settings. This conclusion has serious implications for artificial intelligence.

“We need to distinguish between classification and evaluation,” he says. “Classification would be, for instance, using it for identification purposes like fingerprint recognition. . . which was once a privacy concern but seems to have largely faded away. Using the technology for evaluation would include discerning someone’s sexual orientation or for medical diagnostics.” The latter raises serious ethical questions, he says. One day, for example, health insurance companies could use this information to adjust premiums based on a predisposition to a condition.

As the media frenzy around Kosinski’s work has died down over the last few weeks, he is gearing up next to explore whether the same technology can predict political preferences based on facial characteristics. But wouldn’t this just aggravate concerns about discrimination and privacy violations?

“I don’t think so,” he says. “This is the same argument made against our other study.” He then reveals his true goal: “In the long term, instead of fighting technology, which is just providing us with more accurate information, we need solutions to the consequences of having that information. . . like more tolerance and more equality in society,” he says. “The sooner we get down to fixing those things, the better we’ll be able to protect people from privacy or discrimination issues.”

In other words, instead of raging against the facial-recognition machines, we might try to sort through our inherent human biases instead. That’s a much more complex problem that no known algorithm can solve.

**Originally published in the Boston Globe**

Could a VR walk in the woods relieve chronic pain?

When pain researcher Diane Gromala recounts how she started in the field of virtual reality, she seems reflective.

She had been researching virtual reality since the early 1990s, but her shift to focusing on how it could be used for chronic pain management began in 1999, when her own chronic pain became worse. Before that, her focus was on VR as entertainment.

Gromala, 56, was diagnosed with chronic pain in 1984, but the left-sided pain that extended from her lower stomach to her left leg worsened over the next 15 years.

“Taking care of my chronic pain became a full-time job. So at some point I had to make a choice — either stop working or charge full force ahead by making it a motivation for my research. You can guess what I chose,” she said.

Now she’s finding that immersive VR technology may offer another option for chronic pain, which affects at least one in five Canadians, according to a 2011 University of Alberta study.

“We know that there is some evidence supporting immersive VR for acute pain, so it’s reasonable to look into how it could help patients that suffer from chronic pain.”

Gromala has a PhD in human computer interaction and holds the Canada Research Chair in Computational Technologies for Transforming Pain. She also directs the pain studies lab and the Chronic Pain Research Institute at Simon Fraser University in Burnaby, B.C.

Using VR to relieve or treat acute pain has been done for a while.

In the 1990s, researcher Hunter Hoffman conducted one of the earliest studies looking at VR for pain relief, at the University of Washington’s human interface technology lab. His initial focus was burn victims.

Movement and exercise

Since then, the field has expanded. Gromala’s lab focuses on adapting evidence-based therapies that work specifically for chronic pain, such as mindfulness-based stress reduction, to VR. The lab has published studies on its virtual meditative walk, designed to guide and relax patients.

Movement and exercise are a key part of chronic pain management in general. But for many patients, it can be too difficult.

“Through VR we can help create an environment where, with a VR headset, they can feel like they are walking through a forest, all while hearing a guided walking meditation,” Gromala said.

The team also designed a meditation chamber, in which a person lies in an enclosed space; as their breathing becomes more relaxed, a jellyfish viewed through VR gradually dissolves.

Each experiment gives real-time feedback to the patient through objective correlates of pain such as skin temperature and heart rate. For instance, skin surface temperature and heart rate can rise while a person is feeling pain.

While pain medications can be important, chronic pain treatment should also address lifestyle aspects, says Neil Jamensky, a Toronto anesthesiologist and chronic pain specialist.

“Physical rehabilitation therapy, psychological support and optimizing things like nutrition, exercise, sleep and relaxation practices all play key roles in chronic pain management,” he said.

Going global

Other researchers like Sweden’s Dr. Max Ortiz-Catalan from Chalmers University of Technology have looked at virtual and augmented reality for phantom limb pain — the particularly challenging syndrome among amputees who experience pain in a limb that is not physically there.

In his study, published in The Lancet in December 2016, Ortiz-Catalan demonstrated a 47 per cent reduction in symptoms among VR participants.

He believes the reason behind it is a “retraining” of the brain, where pathways in the brain effectively re-route themselves to focus more on movement, for instance.

“We demonstrated that if an amputee can see and manipulate a ‘virtual’ limb — which is projected over their limb stump — in space, over time, the brain retrains these areas.

“Through this retraining, the brain reorganizes itself to focus on motor control and less on pain firing,” said Ortiz-Catalan.

With only 14 patients, this was a pilot study, but he plans to expand the work into a multi-centre, multi-country study later this year. The University of New Brunswick is one of the planned study sites.

There’s an app for this

Others in the United States have published their own findings of VR for chronic pain.

Last month, Ted Jones and colleagues from Knoxville released results of their pilot study of 30 chronic pain patients who were offered five-minute sessions using a VR application called “Cool!” — an immersive VR program administered through a computer and viewed through a head-mounted device.

All reported a decrease in pain while using the app — some by as much as 60 per cent — and post-session pain decreased by 33 per cent. The findings were published in the journal PLOS ONE.

“What was interesting to observe was that the pain decreased for six to 48 hours post-VR experience. It’s not as long as we would like, but does illustrate that relief can be sustained over some period of time,” Jones said.

His team will be expanding the research this year and will also look at how VR can help with the challenging mental health side-effects of chronic pain.

Next steps

Jamensky points out that while VR could be a promising treatment one day, one challenge with clinical trials is their dependence on pain scores when assessing the effectiveness of VR. This can overshadow individual patient goals.

For instance, while the ability to decrease any individual’s pain score from a “seven out of 10” to a “three out of 10” can be challenging, improving functionality and quality of life can often be more valuable to the patient.

“A pain score may not always be the best way to assess treatment success, since the therapeutic goal may not be to eliminate pain or improve this score, but to ensure better sleep, better mobility, improved mood or even an ability to return to work,” he said.

VR as a technology for chronic pain management is in its infancy. Gromala notes that further research, in addition to standardizing the VR delivery devices, is needed before it becomes a standard of care. And future studies must include practical outcomes.

“It is important to realize that the ‘pain’ of chronic pain may never go away, and that ultimately the individual must learn to deal with the pain so that they can function better,” Jamensky said.

Gromala agrees.

For her, developing an awareness of how sleep, mood and exercise affect her own pain experience has made a huge difference.

In fact, it has motivated her to continue both advocating for chronic pain patients and partnering with clinical pain specialists on research.

” ‘Taking care of yourself’ means a different thing for chronic pain sufferers. It’s much tougher,” Gromala said.

“So as researchers we have a big task ahead of us, and sometimes it means exploring whether out-of-the-box methods like VR can help.”

**Originally published on CBC.ca**

For the sake of doctors and patients, we must fix hospital culture

When hospitals fail to create a culture where doctors and nurses can speak up, patients pay the price
By: Blair Bigham and Amitha Kalaichandran.

It seems too often that reporters—not doctors—sound the alarm when systemic problems plague hospitals: whispers in the shadows indicate widespread concerns, but individuals feel unable to speak up. Recently, reports surfaced that children were dying at higher than expected rates after surgery at the University of North Carolina, despite warnings from doctors about the department’s performance. And whether in Australia, the United Kingdom, Canada, or the United States, reports show that bullying is alive and well.

This pervasive culture—where consultant doctors, residents, and other hospital staff feel that they cannot bring up critically important points of view—must change. It shouldn’t take investigative journalism to fix the culture that permits silence and bullying. But it does take all of us to rethink how physicians and leaders work together to improve hospital culture.

Investing in improving hospital culture makes a difference to patient care and the quality of the learning experience.

Recent studies on workplace culture show how important it is. In a new JAMA Surgery study, surgeons who had several reports of “unprofessional behaviour” (defined as bullying, aggression, and giving false or misleading information) had patient complication rates about 40% higher than surgeons who had none. Domains of professionalism include competence, communication, responsibility, and integrity. Last year, hospital culture was directly linked to patient outcomes in a major study led by Yale School of Public Health scientist Leslie Curry. Risk-standardized mortality rates after a heart attack were higher in hospitals that had a culture that was less collaborative and open.

Curry’s team created a programme to improve hospital culture, namely by enhancing psychological safety—a term that signifies a willingness of caregivers to speak freely about their concerns and ideas. When hospital culture changed for the better, heart attack outcomes drastically improved and death rates fell.

There are examples of good practice where psychological safety and transparency are valued, and these centres often boast better patient outcomes. A recent systematic review of 62 studies, for instance, found fewer deaths, fewer falls, and fewer hospital-acquired infections in healthcare settings with healthier cultures.

The impact of healthcare workplace culture doesn’t end with patient safety. Physician retention, job satisfaction, and teamwork all benefit from a strong organizational culture in hospitals. This is crucial at a time when burnout in medicine is high. Hospitals can also learn from the tech industry, which discovered early on that psychological safety is key to innovation. In other words, those who are afraid of failing tend not to suggest the bold ideas that lead to great progress.

So how can hospitals make improvements to their culture?

The first step is to shine a light on the culture by measuring it. Staff surveys and on-site observations can illuminate negative workplace cultures so that boards and executives can weigh culture scores in the same regard as wait times and revenue. Regulators and accreditors could incorporate workplace culture indicators into their frameworks to increase accountability. We recently saw this in Sydney, Australia, where a third residency programme lost its accreditation due to bullying of junior doctors.

The second is to hire talented leaders based not just on their clinical competence but also on their inclusiveness, integrity, empathy, and ability to inspire. By setting the “tone at the top,” leaders can influence the “mood in the middle” and chip away at ingrained attitudes that tolerate, or even support, bullying, secrecy, and fear of speaking out.

Another solution rejects the hierarchy historically found between doctors, nurses and patients, and embraces diversity and inclusion. Effective collaboration helps shift the tribe-versus-tribe attitudes towards a team mindset. Part of this involves amplifying ideas from voices that are traditionally not heard: those of women, the disabled, and ethnic and sexual minorities. As well, leadership must change to be more diverse and inclusive, to reflect the patient population.

The field of medicine attracts motivated, intelligent, and caring people. But being a good caregiver and being a good leader are very different, and training in the latter is sadly lacking.

For every investigative report that uncovers a hospital’s culture of silence—whether it’s unacceptable bullying, unusual death rates, or pervasive secrecy—there are surely hundreds more left uncovered. The fix to this global epidemic requires deep self-reflection and a firm commitment to choose leaders who promote transparency and openness. Implicit in the physicians’ vow “to do no harm” is a vow not to stay silent, as silence too can be harmful. We must first and foremost create cultures that ensure we feel safe to speak up when things aren’t right. Our patients’ lives—and those of our colleagues—depend on it.

**Originally published in the BMJ**

Preventing children from dying in hot cars

One of the biggest lessons I learned a decade ago in public-health graduate school was that education was rarely enough, on its own, to fundamentally change behavior. Educating the public about health was “necessary but not sufficient,” as one of my epidemiology professors had put it. Weight loss, smoking cessation, safe sexual practices — education campaigns weren’t enough.

Decades of educating the public about the dangers of leaving children unattended in cars where the temperature can turn deadly — even on a sunny but not especially hot day — clearly have not been sufficient. The deaths of 11-month-old twins on July 26 in a hot car in the Bronx have brought a fresh sense of urgency to finding innovative technology solutions.

But even before that tragedy, bills had been introduced in Congress earlier this year to address the rising incidence of young children dying in overheated cars.

According to the No Heat Stroke organization, which tracks pediatric heatstroke deaths in vehicles, the average number of such deaths annually since 1998 is 38, with 53 deaths recorded last year — the most ever. Sadly, the nation appears certain to set a record in 2019, with 32 deaths already by the second week of August. The Kids and Cars safety group, another tracker, notes that “over 900 children have died in hot cars nationwide since 1990.”

Fifty-four percent of these victims are 1 year old or younger. In a little more than half of the deaths, children have been mistakenly left alone by their caregiver, in what is known as Forgotten Baby Syndrome. Other children die after climbing into hot cars without an adult’s knowledge, and others have been knowingly, sometimes criminally, left in hot cars.

The American Academy of Pediatrics recommends rear-facing seats for crash-safety reasons and last year removed the age recommendation, focusing instead on height and weight. But there is an immense irony in the safety policy: Rear-facing seats prevent the driver from occasionally making eye contact with the child in the rearview mirror, which would keep the child prominent in the adult’s mind. And because a rear-facing seat is often left in the car whether or not a child is in it, the seat’s presence can too easily be taken for granted.

The father in the New York case said he had accidentally left the twins in rear-facing car seats. (A judge on Aug. 1 paused the pursuit of a criminal case against the twins’ father, pending the results of an investigation.)

As a pediatrics resident physician, I’ve seen hundreds of parents and caregivers of young children, and many are simply overwhelmed, sleep-deprived and vulnerable to making tragic errors. Some parents in high-stress professions may have an additional cognitive load, which can lead to distractions.

The American Academy of Pediatrics suggests several ways to help prevent these tragedies by retraining habits and breaking away from the autopilot mode that often sets in while driving and doing errands. But that’s not enough. The Post noted five years ago that automakers’ promises to use technology to prevent hot-car deaths went unrealized. Liability risks, expense and the lack of clear regulatory guidelines also discouraged innovation. Congressional attempts in recent years to legislate on this front have failed.

That all may be changing, given the rising number of child deaths. The Hot Cars Act of 2019, introduced in the House by Rep. Tim Ryan (D-Ohio), would require all new passenger cars to be “equipped with a child safety alert system.” The bill mandates a “distinct auditory and visual alert to notify individuals inside and outside the vehicle” when the engine has been turned off and motion by an occupant is detected.

The Hyundai Santa Fe and Kia Telluride already offer such technology, which is a welcome step in the right direction. But it would not identify infants who have fallen asleep and lie motionless; these detectors are not typically sensitive enough to detect the rise and fall of a child’s chest during breathing.

The Senate version of the hot cars bill proposes an alert to the driver, when the engine is turned off, if the back door had earlier been opened, offering a reminder that a child may have been placed in a car seat.

The development of sensors for autonomous-vehicle technology is promising — how much harder will it be to alert drivers to people’s presence inside the car, not outside? Other ideas to consider: A back-seat version of the passenger-seat weight sensor that cues seat-belt use, with a lower weight threshold to alert the driver (and loud enough for a passerby to hear) once the engine is shut off. Or try something that doesn’t rely on motion or weight — a carbon-dioxide detector that would sense rising levels (we exhale carbon dioxide, and this rises in a closed and confined space) after the engine is off, sounding an alarm while automatically cooling the vehicle.
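
To make the carbon-dioxide idea concrete, here is a rough Python sketch of the alert logic; the sensor and actuator callbacks, and the thresholds, are hypothetical placeholders rather than any real vehicle API:

```python
import time

CO2_RISE_ALARM_PPM = 400  # rise above the shut-off baseline that triggers an alert
POLL_SECONDS = 5

def monitor_cabin_after_shutoff(read_co2_ppm, sound_alarm, start_cooling):
    """After the engine turns off, watch for rising CO2 (an occupant
    exhaling in a closed cabin), then alarm and cool the vehicle."""
    baseline = read_co2_ppm()
    while True:
        if read_co2_ppm() - baseline > CO2_RISE_ALARM_PPM:
            sound_alarm()    # loud enough for a passerby to hear
            start_cooling()  # automatically ventilate or cool the cabin
            return
        time.sleep(POLL_SECONDS)
```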

No parent of a young child is immune to Forgotten Baby Syndrome — we are all capable of becoming distracted, with terrible consequences. Those who have been devastated by such a loss deserve our sympathy, not our scorn. To avoid future such tragedies, applying technical innovation to passenger vehicles is essential.

**Originally published in the Washington Post**

AI Could Predict Death. But What If the Algorithm Is Biased?

Researchers are studying how artificial intelligence could predict risks of premature death. But the health care industry needs to consider another risk: unconscious bias in AI.

Earlier this month the University of Nottingham published a study in PLOS ONE about a new artificial intelligence model that uses machine learning to predict the risk of premature death, using banked health data (on age and lifestyle factors) from Brits aged 40 to 69. This study comes months after a joint study between UC San Francisco, Stanford, and Google, which reported results of machine-learning-based data mining of electronic health records to assess the likelihood that a patient would die in the hospital. One goal of both studies was to assess how this information might help clinicians decide which patients would most benefit from intervention.

The FDA is also looking at how AI will be used in health care and posted a call earlier this month for a regulatory framework for AI in medical care. As the conversation around artificial intelligence and medicine progresses, it is clear we must have specific oversight around the role of AI in determining and predicting death.

There are a few reasons for this. To start, researchers and scientists have flagged concerns about bias creeping into AI. As Eric Topol, physician and author of the book Deep Medicine: Artificial Intelligence in Healthcare, puts it, the challenge of bias in machine learning originates from the “neural inputs” embedded within the algorithm, which may include human biases. And even though researchers are talking about the problem, issues remain. Case in point: The launch of a new Stanford institute for AI a few weeks ago came under scrutiny for its lack of ethnic diversity.

Then there is the issue of unconscious, or implicit, bias in health care, which has been studied extensively, both as it relates to physicians in academic medicine and toward patients. There are differences, for instance, in how patients of different ethnic groups are treated for pain, though the effect can vary based on the doctor’s gender and cognitive load. One study found these biases may be less likely in black or female physicians. (It’s also been found that health apps in smartphones and wearables are subject to biases.)

In 2017, a study challenged the impact of these biases, finding that while physicians may implicitly prefer white patients, this may not affect their clinical decision-making. However, it was an outlier in a sea of other studies finding the opposite. Even at the neighborhood level, which the Nottingham study looked at, there are biases—for instance, black people may have worse outcomes for some diseases if they live in communities with more racial bias toward them. And biases based on gender cannot be ignored: Women may be treated less aggressively after a heart attack (acute coronary syndrome), for instance.

When it comes to death and end-of-life care, these biases may be particularly concerning, as they could perpetuate existing differences. A 2014 study found that surrogate decisionmakers for nonwhite patients are more likely to withdraw ventilation than those for white patients. The SUPPORT (Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments) study examined data from more than 9,000 patients at five hospitals and found that black patients received less intervention toward the end of life, and that while black patients expressed a desire to discuss cardiopulmonary resuscitation (CPR) with their doctors, they were statistically significantly less likely to have those conversations. Other studies have reached similar conclusions, with black patients reporting being less informed about end-of-life care.

Yet these trends are not consistent. One study from 2017, which analyzed survey data, found no significant difference in end-of-life care that could be related to race. And as one palliative care doctor has noted, many other studies have found that some ethnic groups prefer more aggressive care toward the end of life—a preference that may itself be a response to a systematically biased health care system. Even where preferences differ between ethnic groups, bias can still creep in when a physician unconsciously fails to present all the options, or assumes which options a given patient would prefer based on their ethnicity.

We know that health providers can try to train themselves out of their implicit biases. The unconscious bias training that Stanford offers is one option, and something I’ve completed myself. Other institutions have included training that focuses on introspection or mindfulness. But it’s an entirely different challenge to imagine scrubbing biases from algorithms and the datasets they’re trained on.

Given that the broader advisory council that Google recently launched to oversee the ethics behind AI has already been canceled, a better option would be a more centralized regulatory body—perhaps built upon the proposal put forth by the FDA—that could serve universities, the tech industry, and hospitals.

Artificial intelligence is a promising tool that has shown its utility for diagnostic purposes, but predicting death, and possibly even determining death, is a unique and challenging area that could be fraught with the same biases that affect analog physician-patient interactions. And one day, whether we are prepared or not, we will be faced with the practical and philosophical conundrum of having a machine involved in determining human death. Let’s ensure that this technology doesn’t inherit our biases.

**Originally published in Wired**
