Some of America’s biggest companies should consider leveraging their logistical capabilities, from using drive-thru windows for screening to turning megastores into diagnostic and treatment centers, as part of their corporate social responsibility during these dire times.
Dear CEOs of McDonald’s, Apple, Nike, and Marriott:
As you probably know, the success of both China and South Korea in decreasing the number of new COVID-19 cases required not only social distancing but also widespread testing and the isolation of confirmed cases away from their homes. In other instances, even more aggressive testing made a big difference, and the World Health Organization now strongly recommends expanding COVID-19 screening as well as isolation. Italy may have waited too long to implement crucial measures, and North America has lagged behind for some time: estimates show that the US is now less than two weeks behind Italy and far behind in COVID-19 testing.
Testing is not widely available in the US and Canada, and the spread of misinformation is leading symptomatic people to head to their local hospital or family doctor to try to get tested, with limited success, while overburdening the system. The situation is even more dire given that, in New York City for instance, an estimated 80% of ICU beds may already be occupied.
As powerful corporations, I hope you will consider leveraging your own logistical capabilities, as part of your corporate social responsibility, particularly in hotspots like Seattle, San Francisco, Toronto, Vancouver, and New York City. Here are some suggestions for what you can do during these perilous times.
Over the past week, McDonald’s announced it is closing its seating areas. There are over 14,000 McDonald’s locations in the US alone, most of which have drive-thru windows.
So, my first idea involves pausing fast-food preparation for a few weeks in some of these outlets and using the existing drive-thru infrastructure for in-person fever screening (window 1) and COVID-19 throat swabs (window 2, if fever is present). These could be staffed by local nurses (wearing personal protective equipment, or PPE) who might typically work in community clinics that are currently closed. The brand recognition of McDonald’s means that most North Americans could easily locate their nearest franchise. These outlets would effectively serve as “Level 1” screening and diagnostic facilities for the next several weeks, with repeat testing weeks later to assess when an infection has cleared.
Second, over the past week, Apple (which has 272 stores in the US) and Nike (which has 350 stores) have closed their stores. Both chains’ stores, which maximize negative space and average several thousand square feet (up to 4.5 million square feet of unused space in total), have design elements that may help reduce transmission during a pandemic. Some of these stores could be refashioned to serve as “Level 2” diagnostic and treatment centers for more in-depth diagnosis and assessment of confirmed COVID-19, effectively “cohorting” positive cases together. Also, since both Nike and Apple have longstanding manufacturing relationships with China, with independent shipping and warehouse capabilities, they could help store any medical supplies donated from China and its business leaders. Doctors who are not currently trained to work in an emergency department or intensive care unit (for instance, most general practitioners) could administer the tests and basic treatment at these sites while wearing appropriate PPE, offloading the burden from hospitals (which in turn would serve as “Level 3” treatment sites for more advanced care). This could work better than military tents.
Third, China’s success in reducing transmission was in large part due to effectively quarantining cases away from their families (so as not to infect other household members). Yet building large quarantine centers, as China did, is not logistically feasible in North America. As such, now that there are fewer travelers, Marriott, which has wide reach across North America, could offer designated hotels in which to isolate confirmed cases for 14 days to help induce “suppression.”
To be sure, North America should still follow the lead of both Britain and France by harnessing local manufacturing capabilities (which, in the US, requires invoking the Defense Production Act), specifically for personal protective equipment like N95 masks, gloves, and gowns for first responders; this is even more crucial given the shortage. However, the bigger challenge will remain logistical. We may even end up having enough expensive equipment like ventilators (which may be used to serve multiple patients) if the milder cases are effectively identified and treated early.
I agree that “brands can’t save us” — but companies can leverage their strengths in collaboration with government. In fact, there have been countless examples from history of corporations pivoting to assist in public health challenges. The most prominent one that comes to mind is Coca-Cola. For decades, Coca-Cola offered its cold chain and other logistical capabilities to assist public health programs to deliver vaccines and antiretroviral medications, because donating money, simply put, just isn’t enough.
Through innovation, you’ve been able to place a thousand songs in our pockets, boast the largest market share of footwear, become the biggest hotel chain in the world, and serve as the most popular fast-food company. Facilitating widespread screening and diagnostic testing, and ensuring the safe isolation and treatment of mild-to-moderate cases, is not an impossible feat, especially if you work together with the healthcare system. Instead of allowing your brick-and-mortar businesses to sit idle, please consider pivoting towards a solution in collaboration with government, as part of a coordinated and effective pandemic response.
Time is running out.
**Originally published in Fast Company on March 19, 2020**
Canadian and international initiatives aim to apply AI to help solve global health conundrums
As we grapple with the coronavirus (COVID-19) pandemic, it’s worth noting that the pattern of viral spread may have been identified as early as Dec. 31, 2019, by Toronto-based BlueDot.
The group identified an association between a new form of pneumonia in China and a market in Wuhan, China, where animals were being sold and reported the pattern a full week ahead of the World Health Organization (which reported on Jan. 9) and the U.S. Centers for Disease Control and Prevention (which reported it on Jan. 6).
Dr. Kamran Khan, a professor of medicine and public health at the University of Toronto, founded the company in 2014, in large part after his experience as an infectious disease physician during the 2003 SARS epidemic.
The BlueDot team of 40 employees, consisting largely of doctors and programmers, published its work in the Journal of Travel Medicine.
“Our message is that dangerous outbreaks are increasing in frequency, scale, and impact, and infectious diseases spread fast in our highly interconnected world,” Khan wrote via email. “If we want to get in front of these outbreaks, we are going to have to use the resources available to us — data, analytics, and digital technologies — to literally spread knowledge faster than the diseases spread themselves.”
In the past, BlueDot has been able to predict other patterns of disease spread, such as the Zika outbreak in south Florida. Now its list of clients includes the Canadian government and health and security departments around the world. The company combines AI with human expertise to monitor the risk of spread for over 150 different diseases and syndromes globally.
BlueDot, as a company, speaks to the emerging trend of using AI for global health.
In India, for instance, Aindra Systems uses AI to assist in screening for cervical cancer. Globally, one woman dies every two minutes due to cervical cancer, and half a million women are newly diagnosed globally each year: 120,000 of these cases occur in India, where rates are increasing in rural areas.
Founded in 2012 by Adarsh Natarajan, the Aindra team recognized that, in India, mortality rates were high in part due to the six-week delay between collecting samples and reading pathology during cervical cancer screening programs. It was also a human resources issue: in India, one pathologist is expected to serve well over 134,000 Indians.
With the aim of reducing the workload burden and fatigue risk (misdiagnosis rates can increase if the reader is tired and overworked), Aindra built CervAstra. The automated program can stain up to 30 slides at a time and then identify, through an AI program called Clustr, the cells that most appear to be cancerous.
The pathologist then spends time on the flagged samples. Much like traditional global health programs, Aindra works closely with several hospitals and local NGOs in India, and hopes their technology may later be adopted by other developing countries.
“Point of care solutions like CervAstra are relevant to a lot of countries who suffer from forms of cancer but don’t have infrastructure or faculties to deal with it in population based screening programs,” Natarajan says.
Natarajan also points to other areas where AI is relevant in global health, such as drug discovery or assisting specific medical specialists in areas like radiology and pathology. Accenture was able to use AI to identify molecules of interest within 10 months as opposed to the typical timeline of up to 10 years.
The Vector Institute, based in Toronto, is also plugging into the potential of AI and global health. It works as an umbrella for several AI startups, some with a health focus and all aiming to have a global impact.
Melissa Judd, director of academic partnerships at the Vector Institute, points to the United Nations’ sustainable development goals as a framework to help orient AI towards improving global health. Lyme disease, for instance, is a global health issue that also intersects with climate change, and recently a Vector-supported AI initiative was able to identify ticks that spread Lyme disease in Ontario.
Last December, the Vector Institute launched the Global Health and AI Challenge (GHAI) — a collaboration with the Dalla Lana School of Public Health to engage students from across the University of Toronto (from business to epidemiology to engineering) in critical dialogue and problem solving around a global health challenge.
The potential of AI for global health is immense. Major academic journals are also taking note. Last April the Lancet launched the Artificial Intelligence in Global Health report. By looking at 27 cases of how AI has been used in healthcare, editors proposed a framework to help accelerate the cost-effective use of AI in global health, primarily through collaboration between various stakeholders.
As well, a recent commentary in Science identified several key areas of potential for AI and global health, such as low-cost tools powered by AI (for instance an ultrasound powered through a smartphone) and improving data collection during epidemics.
Yet, the authors caution against seeing AI as a panacea and emphasize that empowering local, country-specific, technology talent will be key, as inequitable redistribution of access to AI technology could worsen the rich-poor divide in global health.
This warning aside, Khan with BlueDot is optimistic.
“We are just beginning to scratch the surface as there are many ways that AI can play a key role in global health. As access to data increases in volume, variety and velocity, we will need analytical tools to make sense of these data. AI can play a really important role in augmenting human intelligence,” Khan says.
**Originally published in CBC News**
Two recent US initiatives, the New York Times’ rare disease column and the TBS series Chasing the Cure, point to an emerging trend in the media: the idea that medicine can crowdsource ideas to diagnose difficult cases. But can crowdsourcing really help diagnose patients, and what are the potential pitfalls?
Reaching a correct diagnosis is the crucial aspect of any consultation, but misdiagnosis is common, with some studies suggesting that medical diagnoses are wrong up to 43% of the time. This concern was the focus of a recent report by the World Health Organization. Individual doctors may overlook something, draw the wrong conclusion, or harbor cognitive biases that lead them to the wrong diagnosis. And while hospital rounds, team meetings, and sharing cases with colleagues are ways in which clinicians try to guard against this, medicine could learn from the tech world by applying the principles of “network analysis” to help solve diagnostic dilemmas.
A recent study in JAMA Network Open applied the principle of collective intelligence to see whether combining physicians’ and medical students’ diagnoses improved accuracy. The research, led by Michael Barnett of the Harvard Chan School of Public Health, in collaboration with the Human Diagnosis Project, used a large data set from the Human Diagnosis Project to determine the accuracy of diagnosis according to level of training: staff physicians, trainees (residents and fellows), and medical students. First, participants were provided with a structured clinical case and were required to submit their differential diagnosis independently. Then the researchers gathered participants into groups of between two and nine to solve cases collectively.
The researchers found that at an individual level, trainees and staff physicians were similar in their diagnostic accuracy. But even though individual accuracy averaged only about 62.5%, it leaped to as high as 85.6% when doctors solved a diagnostic dilemma as a group. The larger the group, which was capped at nine, the more accurate the diagnosis.
The Human Diagnosis Project now incorporates elements of artificial intelligence, which aims to strengthen the impact of crowdsourcing. Several studies have found that, when used appropriately, AI has the potential to improve diagnostic accuracy, particularly in fields like radiology and pathology, and there is emerging evidence in ophthalmology.
However, an issue with crowdsourcing and sharing patient data is that it’s unclear how securely patient data are stored and whether patient privacy is protected. This is an issue that comes up time and time again, along with how commercial companies may profit from third parties selling these data, even if presented in aggregate.
As such, while crowdsourcing may help reduce medical diagnostic error, sharing patient information widely, even with a medical group, raises important questions around patient consent and confidentiality.
The second issue involves the patient-physician relationship. So far crowdsourcing doesn’t appear to have a negative impact in this regard: in one study, over half of patients reported benefit from crowdsourcing difficult conditions. However, very few studies have explored this particular issue. It’s entirely possible that patients may want to crowdsource management options, for instance, and obtain advice that runs counter to their physician’s; theoretically, this could be a source of tension.
The last issue involves consent. A survey, presented at the Society of General Internal Medicine Annual Meeting in 2015, reported that 80% of patients surveyed consented to crowdsourcing, with 43% preferring verbal consent and 26% preferring written consent (31% said no consent was needed). Some medico-legal recommendations, however, do outline the potential impact on physicians who crowdsource without appropriate consent, in addition to the possible liabilities of participating in a crowdsourcing platform when their opinion ends up being incorrect. These issues have no clear answers: we may end up in a position where patients are eager to crowdsource difficult-to-diagnose (and treat) sets of symptoms, while physicians exercise sensible caution.
It’s often said that medical information doubles every few months, and that doubling time is only shortening. Collectively, there’s an enormous amount of medical knowledge and experience, both local and global, that barely gets tapped when a new patient reaches the doors of any given hospital or clinic. Applying network intelligence to the most challenging diagnoses, as well as the illusory “easy” ones, may give patients the best of both worlds: the benefit of their doctor’s empathetic care combined with the experience and intelligence of a collective many. But the potential downsides deserve attention as well.
**Originally published in the British Medical Journal**
There’s more than meets the eye — here are some tips to help avoid confusion.
In August 2019, JAMA Pediatrics, a widely respected journal, published a study with a contentious result: Pregnant women in Canada who were exposed to increasing levels of fluoride (such as from drinking water) were more likely to have children with lower I.Q. Some media outlets ran overblown headlines, claiming that fluoride exposure actually lowers I.Q. And while academics and journalists quickly pointed out the study’s many flaws — that it didn’t prove cause and effect; and showed a drop in I.Q. only in boys, not girls — the damage was done. People took to social media, voicing their concerns about the potential harms of fluoride exposure.
We place immense trust in scientific studies, as well as in the journalists who report on them. But deciding whether a study warrants changing the way we live our lives is challenging. Is that extra hour of screen time really devastating? Does feeding processed meat to children increase their risk of cancer?
As a physician and a medical journalist with training in biostatistics and epidemiology, I sought advice from several experts about how parents can gauge the quality of research studies they read about. Here are eight tips to remember the next time you see a story about a scientific study.
1. Wet pavement doesn’t cause rain.
Put another way, correlation does not equal causation. This is one of the most common traps that health journalists fall into with studies that have found associations between two things — like that people who drink coffee live longer lives — but which haven’t definitively shown that one thing (coffee drinking) causes another (a longer life). These types of studies are typically referred to as observational studies.
When designing and analyzing studies, experts must have satisfactory answers to several questions before determining cause and effect, said Elizabeth Platz, Sc.D., a professor of epidemiology and deputy chair of the department of epidemiology at the Johns Hopkins Bloomberg School of Public Health. In smoking and lung cancer studies, for example, researchers needed to show that the chemicals in cigarettes affected lung tissue in ways that resulted in lung cancer, and that those changes came after the exposure. They also needed to show that those results were reproducible. In many studies, cause and effect isn’t proven after many years, or even decades, of study.
2. Mice aren’t men.
Large human clinical studies are expensive, cumbersome and potentially dangerous to humans. This is why researchers often turn to mice or other animals with human-like physiologies (like flies, worms, rats, dogs and monkeys) first.
If you spot a headline that seems way overblown, like that aspirin thwarts bowel cancer in mice, it’s potentially notable, but could take years or even decades (if ever) to test and see the same findings in humans.
3. Study quality matters.
When it comes to study design, not all are created equal. In medicine, randomized clinical trials and systematic reviews are kings. In a randomized clinical trial, researchers typically split people into at least two groups: one that receives or does the thing the study researchers are testing, like a new drug or daily exercise; and another that receives either the current standard of care (like a statin for high cholesterol) or a placebo. To decrease bias, the participant and researcher ideally won’t know which group each participant is in.
Systematic reviews are similarly useful, in that researchers gather anywhere from five to more than 100 randomized controlled trials on a given subject and comb through them, looking for patterns and consistency among their conclusions. These types of studies are important because they help to show potential consensus in a given body of evidence.
Other types of studies, which aren’t as rigorous as the above, include: cohort studies (which follow large groups of people over time to look for the development of disease), case-control studies (which first identify the disease, like cancer, and then trace back in time to figure out what might have caused it) and cross-sectional studies (which are usually surveys that try to identify how a disease and exposure might have been correlated with each other, but not which caused the other).
Next on the quality spectrum come case reports (which describe what happened to a single patient) and case series (a group of case reports), which are both lowest in quality, but which often inspire higher quality studies.
4. Statistics can be misinterpreted.
Statistical significance is one of the concepts that most often confuses lay readers. When a study or a journalistic publication says that a study’s finding was “statistically significant,” it means that the results were unlikely to have happened by chance.
But a result that is statistically significant may not be clinically significant, meaning it likely won’t change your day-to-day. Imagine a randomized controlled trial that split 200 women with migraines into two groups of 100. One was given a pill to prevent migraines and another was given a placebo. After six months, 11 women from the pill group and 12 from the placebo group had at least one migraine per week, but the 11 women in the pill group experienced arm tingling as a potential side effect. If women in the pill group were found to be statistically less likely to have migraines than those in the placebo group, the difference may still be too small to recommend the pill for migraines, since just one woman out of 100 had fewer migraines. Also, researchers would have to take potential side effects into account.
The opposite is also true. If a study reports that regular exercise helped relieve chronic pain symptoms in 30 percent of its participants, that might sound like a lot. But if the study included just 10 people, that’s only three people helped. This finding may not be statistically significant, but could be clinically important, since there are limited treatment options for people with chronic pain, and might warrant a larger trial.
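For readers comfortable with a little code, the arithmetic behind the migraine example above can be checked directly. This is a minimal sketch of a standard two-proportion z-test; the trial counts are the hypothetical ones from the example, not from any real study.

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions
    (normal approximation, pooled standard error)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p1 - p2, p_value

# Hypothetical trial: 12/100 migraines on placebo vs. 11/100 on the pill
diff, p = two_proportion_ztest(12, 100, 11, 100)
print(f"absolute risk difference: {diff:.2%}")  # a 1-percentage-point gap
print(f"p-value: {p:.2f}")                      # far above 0.05
```

With these numbers the test confirms the intuition in the example: a one-in-a-hundred difference is nowhere near statistical significance, and even if a larger trial made it significant, the absolute benefit would remain tiny.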
5. Bigger is often better.
Scientists arguably can never fully know the truth about a given topic, but they can get close. And one way of doing that is to design a study that has high power.
“Power is telling us what the chances are that a study will detect a signal, if that signal does exist,” John Ioannidis, M.D., a professor of medicine and health research and policy at Stanford Medical School said via email.
The easiest way for researchers to increase a study’s power is to increase its size. A trial of 1,000 people typically has higher power than a trial of 500, and so on. Simply put, larger studies are more likely to help us get closer to the truth than smaller ones.
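The relationship between sample size and power can also be seen by simulation. The sketch below, with made-up event rates chosen only for illustration, runs many hypothetical two-arm trials and counts how often a simple two-proportion z-test detects a real difference.

```python
import math
import random

def simulated_power(n, p_control, p_treated, alpha=0.05, trials=1000):
    """Estimate a trial's power by simulation: run many hypothetical
    trials with n participants per arm and count how often a
    two-proportion z-test detects the (real) difference."""
    detections = 0
    for _ in range(trials):
        x_t = sum(random.random() < p_treated for _ in range(n))
        x_c = sum(random.random() < p_control for _ in range(n))
        p_t, p_c = x_t / n, x_c / n
        pooled = (x_t + x_c) / (2 * n)
        se = math.sqrt(max(pooled * (1 - pooled) * (2 / n), 1e-12))
        z = abs(p_t - p_c) / se
        p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
        detections += p_value < alpha
    return detections / trials

random.seed(0)
# A real effect (40% vs. 30% event rate) is usually missed with
# 100 participants per arm, but reliably found with 500 per arm.
print(simulated_power(100, 0.40, 0.30))  # roughly 0.3
print(simulated_power(500, 0.40, 0.30))  # roughly 0.9
```

In other words, the same true effect that a small trial detects only about a third of the time becomes a near-certain finding once the trial is five times larger.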
6. Not all findings apply to you.
If a news article reports that a high-quality study had statistical and clinical significance, the next step might be to determine whether the findings apply to you.
If researchers are testing a hypothetical new drug to relieve arthritis symptoms, they may only include participants who have arthritis and no other conditions. They may eliminate those who take medications that might interfere with the drug they’re studying. Researchers may recruit participants by age, gender or ethnicity. Early studies on heart disease, for instance, were performed primarily on white men.
Each of us is unique, genetically and environmentally, and our lives aren’t highly controlled like a study. So take each study for what it is: information. Over time, it will become clearer whether one conclusion was important enough to change clinical recommendations. Which gets to a related idea …
7. One study is just one study.
If findings from one study were enough to change medical practices and public policies, doctors would be practicing yo-yo medicine, where recommendations would change from day to day. That doesn’t typically happen, so when you see a headline that begins or ends with, “a study found,” it’s best to remember that one study isn’t likely to shift an entire course of medical practice. If a study is done well and has been replicated, it’s certainly possible that it may change medical guidelines down the line. If the topic is relevant to you or your family, it’s worth asking your doctor whether the findings are strong enough to suggest that you make different health choices.
8. Not all journals are created equal.
Legitimate scientific journals tend to publish studies that have been rigorously and objectively peer reviewed, which is the gold standard for scientific research and publishing. A good way to spot a high quality journal is to look for one with a high impact factor — a number that primarily reflects how often the average article from a given journal has been cited by other articles in a given year. (Keep in mind, however, that lower impact journals can still publish quality findings.) Most studies published on PubMed, a database of published scientific research articles and book chapters, are peer-reviewed.
Then there are so-called ‘predatory’ journals, which aren’t produced by legitimate publishers and which will publish almost any study — whether it’s been peer-reviewed or not — in exchange for a fee. (Legitimate journals may also request fees, primarily to cover their costs or to publish a study in front of a paywall, but only if the paper is accepted.) Predatory journals are attractive to some researchers who may feel pressure to ‘publish or perish.’ It’s challenging, however, to distinguish them from legitimate ones, because they often sound or look similar. If an article has grammatical errors and distorted images, or if its journal lacks a clear editorial board and physical address, it might be a predatory journal. But it’s not always obvious and even experienced researchers are occasionally fooled.
Reading about a study can be enlightening and engaging, but very few studies are profound enough to base changes to your daily life. When you see the next dramatic headline, read the story — and if you can find it, read the study, too (PubMed or Google Scholar are good places to start). If you have time, discuss the study with your doctor and see if any reputable organizations like the Centers for Disease Control and Prevention, World Health Organization, American Academy of Pediatrics, American College of Cardiology or National Cancer Institute have commented on the matter.
Medicine is not an exact science, and things change every day. In a field of gray, where headlines sometimes try to force us to see things in black-and-white, start with these tips to guide your curiosity. And hopefully, they’ll help you decide when — and when not to — make certain health and lifestyle choices for yourself and for your family.
**Originally published in the New York Times**
Earlier this month, in a private imaging clinic in the Ginza district of downtown Tokyo, I lay patiently as the MRI machine buzzed and rattled. I wasn’t there at the request of a doctor, but to screen my brain using a machine learning tool called EIRL, which is named after the Nordic goddess Eir. It’s the latest technology, focused on detecting brain aneurysms, from Tokyo-based LPixel, one of Japan’s largest companies working on artificial intelligence for healthcare. Brain aneurysms occur when a blood vessel swells up like a balloon. If it bursts, it can be deadly.
After the MRI, the images are uploaded to a secure cloud, and EIRL begins its analysis, looking for abnormalities. Each scan is then checked by a radiologist, followed by a neurosurgeon. The final report, with the images, is produced within 10 days and is accessible through a secure portal.
While LPixel offers a number of other A.I. tools to assist with CAT scans, X-rays, real-time colonoscopy images, and research image analysis, the EIRL for brain aneurysm detection remains their most advanced offering. The EIRL algorithm was built upon data extracted from over 1,000 images with confirmed brain aneurysms, in partnership with four Japanese universities, including the University of Tokyo and Osaka City University. Data from a 2019 study by LPixel and their partner universities found EIRL for brain aneurysms had a high sensitivity of between 91 and 93% (sensitivity refers to the likelihood of detecting an aneurysm if one is indeed present).
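Sensitivity is easy to compute from a tool's raw counts. The sketch below uses hypothetical numbers chosen only to match the 93% figure; they are not LPixel's actual study data.

```python
def sensitivity(true_positives, false_negatives):
    """Of the scans that truly contain an aneurysm, what fraction
    does the tool flag? (Also called recall.)"""
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives, false_positives):
    """Of the scans with no aneurysm, what fraction does the tool
    correctly clear?"""
    return true_negatives / (true_negatives + false_positives)

# Hypothetical counts: the tool flags 93 of 100 scans with a real aneurysm
print(f"sensitivity: {sensitivity(93, 7):.0%}")  # 93%
```

A screening tool needs both numbers to be high: sensitivity determines how many aneurysms slip through, while specificity determines how many healthy patients are sent for unnecessary follow-up.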
Mariko Takahashi, project manager with LPixel, explains that EIRL differs from computer-assisted devices in that there is a learning component: “EIRL becomes more accurate the more it’s used,” she says. According to Takahashi, EIRL has detected cases of aneurysms that require immediate medical attention, even though the patients displayed no symptoms.
The EIRL for brain aneurysms algorithm was approved by the Japanese Pharmaceutical and Medical Devices Agency (PMDA) in the category of software as a medical device in Japan in September. The algorithm is based entirely on Japanese patients, but it could be generalized to other populations, says Takahashi, though she notes that their group is looking into studies showing that the Japanese anatomy of brain vessels may vary slightly from other ethnic groups and whether the algorithm would therefore need to be validated in other populations.
EIRL does have competitors. A Korean startup called Deepnoid is developing a brain aneurysm detection tool using MRI. Also, GE Healthcare is using brain CT to detect aneurysms. Lastly, Stanford is positioning itself to use deep learning in brain CTs to detect brain aneurysms, though it appears to be intended for diagnosis, not screening. Competitors in Belgium and China as well are using AI to detect brain tumors.
LPixel hopes to gain FDA approval for EIRL in the U.S. in 2020 and is working to ensure it meets HIPAA regulations for privacy and security.
But just because you might soon be able to get AI-assisted screening for your brain, should you?
It’s a complicated and very personal question. In the U.S. and Canada, there is a push to reduce unnecessary testing, which includes limiting screening tests to those that are inexpensive and have been shown to reduce the likelihood of disease, such as breast cancer and colon cancer. Currently in the U.S., Canada, and U.K., there is no recommended population-wide screening program for brain aneurysms, and the American College of Radiology recommends that head and neck MRIs be limited to situations where there are symptoms suggesting a pathology such as a tumor, or for cases where there may be brain metastasis of another cancer (such as breast cancer).
There are dangers to overscreening, particularly when it comes to the brain: for one, the possibility of unnecessary and invasive testing. In essence: when you go hunting for abnormalities in the brain, you might find things you didn’t expect to uncover—for example, an “incidentaloma,” which is a lesion that isn’t necessarily harmful or may just be a normal variation in human anatomy. These can occur in up to one-third of healthy patients. The harm involved in investigating these, such as the risk of infection when obtaining a sample, can outweigh the benefits.
However, those who are at high risk of aneurysms, such as those with a family history, may warrant screening. Notably, in Japan, brain aneurysms are more common compared to other populations, an issue that may also be muddied by the fact that more people choose to be screened for it. They may also be more likely to rupture. And MRI screening in Japan is less expensive: roughly $200-$300 for a head MRI, which is around 50-75% less than in North America.
Dr. Eric Topol, physician and author of the book Deep Medicine: Artificial Intelligence in Healthcare, shares these sentiments. “There’s no question AI will help accuracy of brain image interpretation (meaning the fusion of machine and neuroradiologist, complementary expertise) but there are drawbacks such as the lack of prospective studies in the real clinical world environment; potential for algorithmic malware and glitches, and many more, which I reviewed in the ‘Deep Liabilities’ chapter of my book,” Topol says. “Personally I do not see the benefit to using AI technology for ‘screening’ of brain aneurysms at this time, as there’s no data or evidence to support the benefit, at least in patients without relevant symptoms.”
That said, if the algorithm is validated for populations outside Japan, there could be potential in diagnostic situations, for instance in hospitals as opposed to private clinics, as well as for high-risk individuals who need screening. And that’s where the company seems to be headed.
“Right now we’re exploring how to best roll out technology in hospitals in Japan, in collaboration with our partners,” Takahashi says.
As for me, I received my results about 9 days later, and—assuming the translation from Japanese to English was accurate—according to EIRL, there were no abnormalities.
**Originally published in Fast Company**
“So, if we’re worried about viral myocarditis, would the patient have similar symptoms as someone with pericarditis?” The astute medical student slipped me his question as we hurriedly made our way across the ward to the next patient’s room.
He had wondered whether inflammation of the heart muscle (as in myocarditis) presents like inflammation of the protective layer around the heart (the pericardium). Classically we are taught that pericarditis-type chest pain is better when sitting up (because the protective layer is kept away from the nerves that transmit pain) compared with lying down or when taking deep breaths.
“Well there is some overlap in clinical signs,” I began. But we were already on to the next patient, and so my attention was redirected. The student had looked eager to hear my response, but that expression quickly slipped away.
These missed opportunities to explore and address complex questions are frequent in medical education, and the downstream consequences of failing to foster this curiosity are significant.
Curiosity is the necessary fuel for rethinking one’s own biases, and it can pay dividends for patient care. When doctors think through a set of symptoms independently, they may reach different conclusions; one study, for example, found that up to 21% of second opinions differ from the original diagnosis.
Allowing doctors to express their curiosity is crucial, and it’s time we encouraged all medical trainees to be curious.
The decline in curiosity could be caused, in part, by medical trainees assuming a traditionally passive role in hierarchically organized settings like hospitals, suggests a 2011 paper, coauthored by Ronald Epstein, MD, a professor of family medicine, psychiatry, oncology and medicine at the University of Rochester Medical Center.
“There’s a dynamic tension here. People pursue medicine because they are curious about the human experience and scientific discovery, but early in training they are taught to place things in categories and to pursue certainty,” Epstein told me.
A 2017 McGill University study led by pediatrician Robert Sternzus, MD, took this theme a step further. Sternzus and colleagues surveyed medical students across all four years about two types of curiosity: trait curiosity, an inherent tendency to be curious; and state curiosity, the curiosity a learner actually experiences in a given environment. Trait curiosity across all four years was significantly higher than state curiosity. The authors concluded that the medical students’ natural curiosity may not have been supported by their learning environment.
“I had always felt that curiosity was strongly linked to performance in the students I worked with,” Sternzus says. “I also felt, as a learner, that I was at my best when I was most curious. And I certainly could remember periods in my training where that curiosity was suppressed. In our study the trends that we found with regards to curiosity across the years confirmed what I had hypothesized.” Sternzus has since spearheaded a faculty development workshop on promoting curiosity in medical trainees.
So what might be the solution, especially as the move towards competency-based training programs may not reward curiosity, and at a time when companies in places like Silicon Valley — which invest in curious and talented minds — position themselves to be another gatekeeper of health care?
New work led by Jatin Vyas, MD, PhD, an infectious disease physician and researcher who directs the internal medicine residency at Massachusetts General Hospital, offers one idea. His team developed a two-week elective called Pathways, which allows an intern to investigate a case where the diagnosis is unknown or the science isn’t quite clear. The intern then presents their findings to a group of up to 80 experienced physicians and trainees.
“What I have found is that many interns and residents have lots of important questions. If our attendings are not in tune with that — and it’s often due to a lack of time or expertise — the residents’ questions are oftentimes never discussed,” Vyas says. “When I was a resident, my mentors helped me articulate these important questions, and I believe this new generation of trainees deserve the same type of stimulation and the Pathways elective is one way to help address this.”
At the end of June, Pathways reached the end of its second year, and Vyas recounts that resident satisfaction, clinical-teacher satisfaction, and patient satisfaction were all high. “Patients have expressed gratitude for having trainees eager to take a fresh look at their case, even though they may not receive a breakthrough answer,” Vyas says.
The job of more experienced clinicians is to nurture learners’ curiosity, not just for the value it provides students but for the benefits it brings patients, Faith Fitzgerald, MD, an internist at the University of California, Davis, has written. Physicians of the future, and the patients they care for, deserve this.
**Originally published in the Stanford Medicine Scope Blog**
A few months ago, the Centers for Disease Control and Prevention published a report about a young boy from Connecticut who developed lead poisoning as a direct result of his parents giving him a magnetic healing bracelet for teething. It seems every few months a story will cover a tragic case of a parent choosing an unconventional medical treatment that causes harm.
More often, the alternative treatments parents choose pose little risk to their kids — anything from massage therapy to mind-body therapies like mindfulness meditation and guided imagery. Research indicates that overall, there are few serious adverse events related to using alternative therapies. But when they do occur, they can be catastrophic, in some cases because caregivers or alternative care providers are poorly informed on how to recognize the signs of serious illness.
The National Center for Complementary and Integrative Health, part of the National Institutes of Health, now refers to these alternative treatments as complementary health approaches, or C.H.A. They are defined as “a group of diverse medical and health care systems, practices and products not presently considered to be part of conventional Western medicine.” In some cases they complement traditional care. In others they are used in place of standard medical practices.
It’s a polarizing subject that unfortunately gets muddled with conversations about anti-vaccination. But while some anti-vaxxers use complementary health approaches, people who use C.H.A. don’t necessarily doubt vaccine effectiveness.
What’s less clear is what proportion of parents choose complementary health approaches for their children, for which conditions, and how effective they perceive them to be. We also know very little about parents’ willingness to discuss C.H.A. use with their child’s doctor. And most doctors receive little training in C.H.A., especially its use in children, or in how to counsel parents about it.
To explore these questions, we surveyed parents in a busy emergency room in eastern Ontario, Canada. As reported in our recent study, just over 60 percent said they gave their child a C.H.A. within the last year. Vitamins and minerals (59 percent) were the most common ingested treatment, and half the parents used massage. Our research found that parents with a university-level education were more likely to use a complementary treatment than those with less education.
Parents also perceived most of the C.H.A. that they used — from vitamins and minerals to aromatherapy to massage — as effective. However, less than half of parents felt that homeopathy or special jewelry would be helpful.
As reported in our recent paper, we then asked parents if they had tried a complementary therapy for the problem at hand before they came to the emergency room. Just under one-third reported using C.H.A. for a specific condition, most often for gastrointestinal complaints. Interestingly, in the case of emergency care, there was no correlation with the parents’ level of education.
In work we previously presented at the International Congress of Pediatrics, we asked these parents whether they believed their provider — a nurse practitioner or a doctor — was knowledgeable about complementary medicine. About 70 percent believed their health provider was knowledgeable about C.H.A., although this perception was less likely among parents with a university-level education. Surprisingly, 88 percent said they felt comfortable discussing their use of C.H.A. with their medical provider.
Previous reports have found that only between 40 percent and 76 percent of parents actually disclose C.H.A. use to their doctor. In our study, we were talking to parents who had brought their child to an emergency room, where they would be more likely to discuss whatever treatments they had tried. In many cases, parents may not even take their child to the doctor if the problem is not serious. So the overall proportion of parents who use C.H.A. for their children is likely an underestimate.
Our findings underscore the need for parents and their child’s health providers to have more open conversations about what they are giving to their child for health reasons.
Medical providers also need to be actively asking whether C.H.A. is used and stay up-to-date on current evidence about complementary therapies, including potential interactions with any medications they may also be taking. Much of this information is summarized on the N.C.C.I.H. website.
Here are some ways parents can approach the issue of alternative therapies with their doctors:
■ Write down everything your child is using as though it’s a medication. Include any special diets, teas and visits to other complementary medicine providers.
■ Keep track of any positive and negative results from C.H.A. that you notice — including no effect — and the cost involved.
■ If your child’s health provider doesn’t ask about C.H.A., start the conversation.
Physicians and other medical providers should:
■ Learn more about these treatments and the evidence behind them. The N.C.C.I.H. is a good place to start.
■ Try not to be judgmental; causing a rift with a parent because you might not agree with their choices may cause a breakdown in the therapeutic relationship.
■ Evaluate risks and benefits, and be aware of what is unknown about the specific C.H.A. being used. Make efforts to learn more about the therapy and take action if there are clear side effects and risks, documenting the discussion where appropriate.
Parents and doctors are on the same team when it comes to caring for a child’s health. Taking time to explore what parents and children are using, including any therapies that lie outside the scope of conventional medical practice, provides an opportunity to have open and honest discussions about risk, benefits and safety around complementary health approaches.
**Originally published in the New York Times**
Stanford Graduate School of Business researcher Michal Kosinski set out to answer the latter question in a controversial new study. Using a deep-learning algorithm, Kosinski and his colleagues inputted thousands of photos of white Americans who self-identified as either gay or straight, and tagged them accordingly. The software then learned physical commonalities — micro quantitative differences based on facial measurements — to distinguish gay from straight features.
His team found that the computer had astonishingly accurate “gaydar,” though it was slightly better at identifying gay men (81 percent accuracy) than lesbians (74 percent accuracy). Notably, the software outperformed human judges in the study by a wide margin.
Kosinski’s work was based on previous but controversial research that suggests that the hormonal balance in the womb influences sexual orientation as well as appearance. “Data suggests that [certain groups of] people share some facial characteristics that are so subtle as to be imperceptible to the human eye,” Kosinski says. The study, according to Kosinski, merely tested that theory using a respected algorithm developed by Oxford Vision Lab.
Predictably, rights groups, including GLAAD and Human Rights Campaign, were outraged by Kosinski’s study, simultaneously questioning his methods while suggesting that his program was a threat to members of the gay community.
Kosinski is known as both a researcher and a provocateur. He says that one of the goals for the study was to warn us of the dangers of artificial intelligence. He designed his research, he says, to goad us into taking privacy issues around machine learning more seriously. Could AI “out” people in any number of ways, making them targets of discrimination?
But for the sake of argument, let’s suppose that facial-recognition technology will keep improving, and that machines may someday be able to quickly detect a variety of characteristics — from homosexuality to autism — that the unaided human eye cannot. What would it mean for society if highly personal aspects of our lives were written on our faces?
I remember the first time I saw a baby with Down syndrome, the condition that appears in patients who have a third copy of chromosome 21 instead of the usual pair. The infant was born in a community hospital to a mother who had declined genetic screening. As he lay in his cot a few hours after birth, his up-slanted “palpebral fissures” (eyelid openings) and “short philtrum” (groove in the upper lip), among many other features, seemed subtle. It took only a glance for my attending, an experienced pediatrician, to know the diagnosis was likely. (Later on, a test called a karyotype confirmed the presence of the extra chromosome.)
Could AI someday replace a professional human diagnostician? Just by looking at a subject, Angela Lin, a medical geneticist at Massachusetts General Hospital, can discern a craniofacial syndrome with a high degree of accuracy. She also uses objective methods — measuring the distance between eyes, lips, and nose, for example — for diagnostic confirmation. But this multifaceted technique is not always perfect. That’s why she believes facial recognition software could be useful in her work.
Lin stresses that facial recognition technology is just one of many diagnostic tools, and that in most cases it’s not a substitute for a trained clinical eye. She also worries that widespread use of facial recognition software could be problematic: “The main barrier for me is privacy concerns. . . we want to be sure the initial image of the person is deleted.”
Autism, for one, may involve physical characteristics too subtle for the human eye to detect. A few months ago, an Australian group published a study that used facial-recognition technology to discern the likelihood of autism using 3-D images of children with and without the condition. As in Kosinski’s study, the computer “learned” the facial commonalities of those with autism and successfully used them as a predictive tool.
The lead study author, Diana Tan, a PhD candidate at University of Western Australia School of Psychological Sciences, warns that the technology has its limitations. A diagnosis of autism requires two distinct elements: identifying social and communication challenges, and behavioral analysis of repetitive behaviors and restrictive interests.
Some scientists believe the social-communication difficulties may be linked to elevated prenatal testosterone — known as the “extreme male brain” theory of autism. Facial masculinization may result from this excessive testosterone exposure, and the computer algorithm was good at picking it up, which could explain its ability to predict autism through a photo alone.
The facial recognition technology was less successful in tracking traits related to severity: that is, repetitive behaviors and restrictive interests. While the computer successfully identified children with autism whose behaviors were marked by lack of empathy, sensitivity, and other typically male traits (i.e. social-communication issues), it was less successful in diagnosing the children who predominantly exhibited restrictive and repetitive behaviors. This suggests that the latter aspects may not be related to hormone exposure and its related physical changes.
“While [the study] supports the ‘hypermasculine brain theory’ of autism,” Tan says, “it’s not a perfect correlation.”
“In my view,” she says, “[our technique] should be complementary to existing behavioral and development assessments done by a trained doctor, and perhaps one day it could be done much earlier to help evaluate risk,” adding that 3-D prenatal ultrasounds may potentially provide additional data, allowing autism risk to be predicted before birth.
Regardless of the technology’s apparent shortcomings, companies have been quick to leverage big data and facial-recognition capabilities to assist diagnosticians. Boston-based FDNA has been developing technology for use in clinical settings over the last five years and released a mobile app for professionals called Face2Gene in 2014. In principle, it’s similar to the facial recognition software used in Tan’s and Kosinski’s studies, but rather than probing pure science, it’s intended to do what doctors like Lin spend decades learning: diagnose genetic conditions based on facial characteristics.
Last year, the company teamed up on a study to use the app to help with autism diagnoses. The work has not yet been validated in the clinical setting, but it is already gaining adherents.
“We have over 10,000 doctors and geneticists in 120 countries using the technology,” says Jeffrey Daniels, FDNA’s marketing director. “As more people use it, the database expands, which improves its accuracy. And in cases where doctors input additional data” — for instance, information about short stature or cognitive delay, which often helps narrow down a diagnosis — “we can reach up to 88 percent diagnostic accuracy for some conditions.”
Apple, Amazon, and Google have all teamed up with the medical community to try to develop a host of diagnostic tools using the technology. At some point, these companies may know more about your health than you do. Questions abound: Who owns this information, and how will it be used?
Could someone use a smartphone snapshot, for example, to diagnose another person’s child at the playground? The Face2Gene app is currently limited to clinicians; while anyone can download it from the App Store on an iPhone, it can only be used after the user’s healthcare credentials are verified. “If the technology is widespread,” says Lin, “do I see people taking photos of others for diagnosis? That would be unusual, but people take photos of others all the time, so maybe it’s possible. I would obviously worry about the invasion of privacy and misuse if that happened.”
Humans are pre-wired to discriminate against others based on physical characteristics, and programmers could easily manipulate AI programming to mimic human bias. That’s what concerns Anjan Chatterjee, a neuroscientist who specializes in neuroesthetics, the study of what our brains find pleasing. He has found that, relying on baked-in prejudices, we often quickly infer character just from seeing a person’s face. In a paper slated for publication in Psychology of Aesthetics, Creativity, and the Arts, Chatterjee reports that a person’s appearance — and our interpretation of that appearance — can have broad ramifications in professional and personal settings. This conclusion has serious implications for artificial intelligence.
“We need to distinguish between classification and evaluation,” he says. “Classification would be, for instance, using it for identification purposes like fingerprint recognition. . . which was once a privacy concern but seems to have largely faded away. Using the technology for evaluation would include discerning someone’s sexual orientation or for medical diagnostics.” The latter raises serious ethical questions, he says. One day, for example, health insurance companies could use this information to adjust premiums based on a predisposition to a condition.
As the media frenzy around Kosinski’s work has died down over the last few weeks, he is gearing up next to explore whether the same technology can predict political preferences based on facial characteristics. But wouldn’t this just aggravate concerns about discrimination and privacy violations?
“I don’t think so,” he says. “This is the same argument made against our other study.” He then reveals his true goal: “In the long term, instead of fighting technology, which is just providing us with more accurate information, we need solutions to the consequences of having that information. . . like more tolerance and more equality in society,” he says. “The sooner we get down to fixing those things, the better we’ll be able to protect people from privacy or discrimination issues.”
In other words, instead of raging against the facial-recognition machines, we might try to sort through our inherent human biases instead. That’s a much more complex problem that no known algorithm can solve.
**Originally published in the Boston Globe**
When pain researcher Diane Gromala recounts how she started in the field of virtual reality, she seems reflective.
She had been researching virtual reality for pain since the early 1990s, but her shift to focusing on how virtual reality could be used for chronic pain management began in 1999, when her own chronic pain became worse. Prior to that, her focus was on VR as entertainment.
Gromala, 56, was diagnosed with chronic pain in 1984, but the left-sided pain that extended from her lower stomach to her left leg worsened over the next 15 years.
“Taking care of my chronic pain became a full-time job. So at some point I had to make a choice — either stop working or charge full force ahead by making it a motivation for my research. You can guess what I chose,” she said.
Now she’s finding that immersive VR technology may offer another option for chronic pain, which affects at least one in five Canadians, according to a 2011 University of Alberta study.
“We know that there is some evidence supporting immersive VR for acute pain, so it’s reasonable to look into how it could help patients that suffer from chronic pain.”
Gromala has a PhD in human computer interaction and holds the Canada Research Chair in Computational Technologies for Transforming Pain. She also directs the pain studies lab and the Chronic Pain Research Institute at Simon Fraser University in Burnaby, B.C.
VR has been used to relieve or treat acute pain for decades.
In the 1990s, researcher Hunter Hoffman conducted one of the earliest studies of VR for pain relief at the University of Washington’s human interface technology lab. His initial focus was burn patients.
Movement and exercise
Since then, the field has expanded. Gromala’s lab focuses on bringing evidence-based therapies that work specifically for chronic pain, such as mindfulness-based stress reduction. They have published studies on their virtual meditative walk to guide and relax patients.
Movement and exercise are a key part of chronic pain management in general. But for many patients, it can be too difficult.
“Through VR we can help create an environment where, with a VR headset, they can feel like they are walking through a forest, all while hearing a guided walking meditation,” Gromala said.
The team also designed a meditation chamber — where a person lies in the enclosed space, breathing becomes more relaxed and a jellyfish viewed through VR dissolves.
Each experiment gives real-time feedback to the patient through objective measures of pain such as skin temperature and heart rate. For instance, while feeling pain, skin surface temperature and heart rate can increase.
While pain medications can be important, chronic pain treatment should also address lifestyle aspects, says Neil Jamensky, a Toronto anesthesiologist and chronic pain specialist.
“Physical rehabilitation therapy, psychological support and optimizing things like nutrition, exercise, sleep and relaxation practices all play key roles in chronic pain management,” he said.
Other researchers like Sweden’s Dr. Max Ortiz-Catalan from Chalmers University of Technology have looked at virtual and augmented reality for phantom limb pain — the particularly challenging syndrome among amputees who experience pain in a limb that is not physically there.
In his study, published in The Lancet in December 2016, Ortiz-Catalan demonstrated a 47 per cent reduction in symptoms among VR participants.
He believes the reason behind it is a “retraining” of the brain, where pathways in the brain effectively re-route themselves to focus more on movement, for instance.
“We demonstrated that if an amputee can see and manipulate a ‘virtual’ limb — which is projected over their limb stump — in space, over time, the brain retrains these areas.
“Through this retraining, the brain reorganizes itself to focus on motor control and less on pain firing,” said Ortiz-Catalan.
With only 14 patients, this was a pilot study, but he plans to expand the work into a multi-centre, multi-country study later this year. The University of New Brunswick is one of the planned study sites.
There’s an app for this
Others in the United States have published their own findings of VR for chronic pain.
Last month, Ted Jones and colleagues from Knoxville released results of their pilot study of 30 chronic pain patients who were offered five-minute sessions using a VR application called “Cool!” — an immersive VR program administered through a computer and viewed through a head-mounted device.
All reported a decrease in pain while using the app — some by 60 per cent — and post-session pain decreased by 33 per cent. The findings were published in the journal PLOS ONE.
“What was interesting to observe was that the pain decreased for six to 48 hours post-VR experience. It’s not as long as we would like, but does illustrate that relief can be sustained over some period of time,” Jones said.
His team will be expanding the research this year and will also look at how VR can help with the challenging mental health side-effects of chronic pain.
Jamensky points out that while VR could one day be a promising treatment, one challenge with clinical trials is their reliance on pain scores to assess the effectiveness of VR. This may overshadow individual patient goals.
For instance, while the ability to decrease any individual’s pain score from a “seven out of 10” to a “three out of 10” can be challenging, improving functionality and quality of life can often be more valuable to the patient.
“A pain score may not always be the best way to assess treatment success, since the therapeutic goal may not be to eliminate pain or improve this score, but to ensure better sleep, better mobility, improved mood or even an ability to return to work,” he said.
VR as a technology for chronic pain management is in its infancy. Gromala notes that further research, in addition to standardizing the VR delivery devices, is needed before it becomes a standard of care. And future studies must include practical outcomes.
“It is important to realize that the ‘pain’ of chronic pain may never go away, and that ultimately the individual must learn to deal with the pain so that they can function better,” Jamensky said.
For Gromala, developing an awareness of how sleep, mood and exercise affect her own pain experience has made a huge difference.
In fact, it has motivated her to continue advocating for chronic pain patients and to partner with clinical pain specialists on research.
” ‘Taking care of yourself’ means a different thing for chronic pain sufferers. It’s much tougher,” Gromala said.
“So as researchers we have a big task ahead of us, and sometimes it means exploring whether out-of-the-box methods like VR can help.”
**Originally published on CBC.ca**
When hospitals fail to create a culture where doctors and nurses can speak up, patients pay the price
By: Blair Bigham and Amitha Kalaichandran.
Too often it is reporters, not doctors, who sound the alarm when systemic problems plague hospitals: whispers in the shadows indicate widespread concerns, but individuals feel unable to speak up. Recently, reports surfaced that children were dying after surgery at the University of North Carolina at higher-than-expected rates, despite doctors’ warnings about the department’s performance. And whether in Australia, the United Kingdom, Canada, or the United States, reports show that bullying is alive and well.
This pervasive culture—where consultant doctors, residents, and other hospital staff feel that they cannot bring up critically important points of view—must change. It shouldn’t take investigative journalism to fix the culture that permits silence and bullying. But it does take all of us to rethink how physicians and leaders work together to improve hospital culture.
Investing in improving hospital culture makes a difference to patient care and the quality of the learning experience.
Recent studies on workplace culture show how important it is. In a new JAMA Surgery study, surgeons who had several reports of “unprofessional behaviour” (defined as bullying, aggression, and giving false or misleading information) had patient complication rates about 40% higher than surgeons who had none. Domains of professionalism include competence, communication, responsibility, and integrity. Last year, hospital culture was directly linked to patient outcomes in a major study led by Yale School of Public Health scientist Leslie Curry. Risk-standardized mortality rates after a heart attack were higher in hospitals that had a culture that was less collaborative and open.
Curry’s team created a programme to improve hospital culture, namely by enhancing psychological safety—a term that signifies a willingness of caregivers to speak freely about their concerns and ideas. When hospital culture changed for the better, heart attack outcomes drastically improved and death rates fell.
There are examples of good practice where psychological safety and transparency are valued, and these centres often boast better patient outcomes. A recent systematic review of 62 studies, for instance, found fewer deaths, fewer falls, and fewer hospital-acquired infections in healthcare settings with healthier cultures.
The impact of healthcare workplace culture doesn’t end with patient safety. Physician retention, job satisfaction, and teamwork all benefit from a strong organizational culture in hospitals. This is crucial at a time when burnout in medicine is high. Hospitals can also learn from the tech industry, which discovered early on that psychological safety is key to innovation. In other words, those who are afraid of failing tend not to suggest the bold ideas that lead to great progress.
So how can hospitals make improvements to their culture?
The first is to shine a light on culture by measuring it. Staff surveys and on-site observations can illuminate negative workplace cultures so that boards and executives can weigh culture scores with the same seriousness as wait times and revenue. Regulators and accreditors could incorporate workplace culture indicators into their frameworks to increase accountability. We recently saw this in Sydney, Australia, where a third residency programme lost its accreditation because of the bullying of junior doctors.
The second is to hire talented leaders based not just on their clinical competence, but also on their ability to foster inclusiveness, to act with integrity and empathy, and to inspire. By setting the “tone at the top,” leaders can influence the “mood in the middle,” and chip away at ingrained attitudes that tolerate, or even support, bullying, secrecy, and fear of speaking out.
Another solution rejects the hierarchy historically found between doctors, nurses, and patients, and embraces diversity and inclusion. Effective collaboration helps shift tribe-versus-tribe attitudes towards a team mindset. Part of this involves amplifying voices that traditionally go unheard: those of women, people with disabilities, and ethnic and sexual minorities. Leadership, too, must become more diverse and inclusive, to reflect the patient population.
The field of medicine attracts motivated, intelligent, and caring people. But being a good caregiver and being a good leader are very different, and training in the latter is sadly lacking.
For every investigative report that uncovers a hospital’s culture of silence—whether it’s unacceptable bullying, unusual death rates, or pervasive secrecy—there are surely hundreds more left uncovered. The fix to this global epidemic requires deep self-reflection and a firm commitment to choose leaders who promote transparency and openness. Implicit in physicians’ vow “to do no harm” is the vow not to stay silent, for silence, too, can be harmful. We must first and foremost create cultures in which we feel safe to speak up when things aren’t right. Our patients’ lives—and those of our colleagues—depend on it.
**Originally published in the BMJ**