Facial recognition may reveal things we’d rather not tell the world. Are we ready?

Stanford Graduate School of Business researcher Michal Kosinski set out to answer that question in a controversial new study. Kosinski and his colleagues fed a deep-learning algorithm thousands of photos of white Americans who self-identified as either gay or straight, each tagged accordingly. The software then learned physical commonalities, minute quantitative differences in facial measurements, that distinguish gay from straight faces.
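For the technically curious, the general technique can be sketched in a few lines of code. What follows is a minimal, hypothetical illustration of training a two-class image classifier on labeled face photos by fine-tuning an off-the-shelf pretrained network. It is not Kosinski's actual pipeline (his study used features from a pretrained face-recognition model, from the Oxford group mentioned below, fed into a simpler classifier), and the folder names and settings are invented.

```python
# Minimal, hypothetical sketch of the general technique: fine-tune a
# pretrained image network to separate two labeled groups of face photos.
# NOT the study's actual pipeline; paths and hyperparameters are invented.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumes a folder layout like faces/gay/*.jpg and faces/straight/*.jpg,
# i.e. each photo is tagged with the self-reported label.
data = datasets.ImageFolder("faces", transform=transform)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# Start from a network pretrained on generic images and retrain only
# its final layer to output two classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass over the tagged photos
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

The learned features correspond to the subtle measurement differences described above; the sketch simply shows how tagged photos drive that learning.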

His team found that the computer had astonishingly accurate “gaydar,” though it was slightly better at identifying gay men (81 percent accuracy) than lesbians (74 percent accuracy). Notably, the software outperformed human judges in the study by a wide margin.

Kosinski’s work built on earlier, controversial research suggesting that the hormonal balance in the womb influences both sexual orientation and appearance. “Data suggests that [certain groups of] people share some facial characteristics that are so subtle as to be imperceptible to the human eye,” Kosinski says. The study, according to Kosinski, merely tested that theory using a respected algorithm developed by Oxford Vision Lab.

Predictably, rights groups, including GLAAD and the Human Rights Campaign, were outraged by Kosinski’s study, questioning his methods while warning that his program posed a threat to members of the gay community.

Kosinski is known as both a researcher and a provocateur. He says that one of the goals for the study was to warn us of the dangers of artificial intelligence. He designed his research, he says, to goad us into taking privacy issues around machine learning more seriously. Could AI “out” people in any number of ways, making them targets of discrimination?

But for the sake of argument, let’s suppose that facial-recognition technology will keep improving, and that machines may someday be able to quickly detect a variety of characteristics — from homosexuality to autism — that the unaided human eye cannot. What would it mean for society if highly personal aspects of our lives were written on our faces?

I remember the first time I saw a baby with Down syndrome, the condition that appears in patients who have a third copy of chromosome 21 instead of the usual pair. The infant was born in a community hospital to a mother who had declined genetic screening. As he lay in his cot a few hours after birth, his up-slanted “palpebral fissures” (eyelid openings) and “short philtrum” (groove in the upper lip), among many other things, seemed subtle. It took only a glance from my attending, an experienced pediatrician, to know that the diagnosis was likely. (Later on, a test called a karyotype confirmed the presence of an extra chromosome.)

Could AI someday replace a professional human diagnostician? Just by looking at a subject, Angela Lin, a medical geneticist at Massachusetts General Hospital, can discern a craniofacial syndrome with a high degree of accuracy. She also uses objective methods — measuring the distance between eyes, lips, and nose, for example — for diagnostic confirmation. But this multifaceted technique is not always perfect. That’s why she believes facial recognition software could be useful in her work.
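As a rough illustration of that kind of objective measurement, here is a short, hypothetical sketch that locates facial landmarks in a photo and computes distances between them, using the open-source face_recognition library. The file name and the particular ratio are illustrative, not a clinical protocol.

```python
# Hypothetical sketch of landmark-based facial measurement, the kind of
# objective check described above. Not a clinical protocol.
import numpy as np
import face_recognition

image = face_recognition.load_image_file("patient_photo.jpg")  # invented path
landmarks = face_recognition.face_landmarks(image)[0]          # first face found

def center(points):
    """Mean (x, y) position of a group of landmark points."""
    return np.mean(np.array(points), axis=0)

left_eye = center(landmarks["left_eye"])
right_eye = center(landmarks["right_eye"])
nose_tip = center(landmarks["nose_tip"])
top_lip = center(landmarks["top_lip"])

inter_eye = np.linalg.norm(left_eye - right_eye)   # distance between the eyes
nose_to_lip = np.linalg.norm(nose_tip - top_lip)   # nose tip to upper lip

# Ratios are more robust than raw pixel distances across photo sizes.
print("eye distance / nose-to-lip distance:", inter_eye / nose_to_lip)
```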

Lin stresses that facial recognition technology is just one of many diagnostic tools, and that in most cases it’s not a substitute for a trained clinical eye. She also worries about how widespread use of facial recognition software could be problematic: “The main barrier for me is privacy concerns. . . we want to be sure the initial image of the person is deleted.”

Autism, for one, may involve physical characteristics too subtle for the human eye to detect. A few months ago, an Australian group published a study that used facial-recognition technology to discern the likelihood of autism using 3-D images of children with and without the condition. As in Kosinski’s study, the computer “learned” the facial commonalities of those with autism and successfully used them as a predictive tool.

The lead study author, Diana Tan, a PhD candidate at the University of Western Australia School of Psychological Sciences, warns that the technology has its limitations. A diagnosis of autism requires two distinct elements: identifying social and communication challenges, and assessing repetitive behaviors and restrictive interests.

Some scientists believe the social-communication difficulties may be linked to elevated prenatal testosterone — known as the “extreme male brain” theory of autism. Facial masculinization may result from this excessive testosterone exposure, and the computer algorithm was good at picking it up, which could explain its ability to predict autism through a photo alone.

The facial recognition technology was less successful at tracking traits related to severity: that is, repetitive behaviors and restrictive interests. While the computer successfully identified children with autism whose difficulties were marked by lack of empathy, sensitivity, and other typically male traits (i.e., social-communication issues), it was less successful at identifying the children who predominantly exhibited restrictive and repetitive behaviors. This suggests that the latter aspects may not be related to hormone exposure and its associated physical changes.

“While [the study] supports the ‘hypermasculine brain theory’ of autism,” Tan says, “it’s not a perfect correlation.”

“In my view,” she says, “[our technique] should be complementary to existing behavioral and development assessments done by a trained doctor, and perhaps one day it could be done much earlier to help evaluate risk,” adding that 3-D prenatal ultrasounds may potentially provide additional data, allowing autism risk to be predicted before birth.

Regardless of the technology’s apparent shortcomings, companies have been quick to leverage big data and facial-recognition capabilities to assist diagnosticians. Boston-based FDNA has been developing such technology for clinical settings over the last five years and in 2014 released a mobile app for professionals called Face2Gene. In principle, it’s similar to the facial recognition software used in Tan’s and Kosinski’s studies, but rather than serving pure research, it’s intended to do what doctors like Lin spend decades learning to do: diagnose genetic conditions based on facial characteristics.

Last year, the company teamed up on a study to use the app to help with autism diagnoses. The work has not yet been validated in the clinical setting, but it is already gaining adherents.

“We have over 10,000 doctors and geneticists in 120 countries using the technology,” says Jeffrey Daniels, FDNA’s marketing director. “As more people use it, the database expands, which improves its accuracy. And in cases where doctors input additional data” — for instance, information about short stature or cognitive delay, which often helps narrow down a diagnosis — “we can reach up to 88 percent diagnostic accuracy for some conditions.”
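To make that last point concrete, here is a toy, entirely made-up illustration of how clinician-entered findings could sharpen a facial-analysis result. The syndrome names, scores, and weights are fabricated, and this is not FDNA's actual method; the idea is simply that each additional finding rescales and re-ranks the candidate diagnoses.

```python
# Toy illustration only: combine a facial-analysis score with clinician-
# entered findings to re-rank candidate diagnoses. The syndromes, scores,
# and weights below are invented; this is not FDNA's algorithm.
import math

face_scores = {          # similarity scores from a facial-analysis model
    "Syndrome A": 0.40,
    "Syndrome B": 0.35,
    "Syndrome C": 0.25,
}

# How strongly each (hypothetical) syndrome is associated with two findings.
feature_weights = {
    "Syndrome A": {"short_stature": 0.9, "cognitive_delay": 0.8},
    "Syndrome B": {"short_stature": 0.2, "cognitive_delay": 0.7},
    "Syndrome C": {"short_stature": 0.1, "cognitive_delay": 0.1},
}

observed = ["short_stature", "cognitive_delay"]  # findings the doctor enters

# Scale each face score by the weights of the observed findings, then
# renormalize so the results sum to one.
combined = {
    name: face_scores[name]
    * math.prod(feature_weights[name][f] for f in observed)
    for name in face_scores
}
total = sum(combined.values())
for name, score in sorted(combined.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score / total:.2f}")
```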

Apple, Amazon, and Google have all teamed up with the medical community to try to develop a host of diagnostic tools using the technology. At some point, these companies may know more about your health than you do. Questions abound: Who owns this information, and how will it be used?

Could someone use a smartphone snapshot, for example, to diagnose another person’s child at the playground? The Face2Gene app is currently limited to clinicians; while anyone can download it from the App Store on an iPhone, it can only be used after the user’s healthcare credentials are verified. “If the technology is widespread,” says Lin, “do I see people taking photos of others for diagnosis? That would be unusual, but people take photos of others all the time, so maybe it’s possible. I would obviously worry about the invasion of privacy and misuse if that happened.”

Humans are pre-wired to discriminate against others based on physical characteristics, and programmers could easily build that same bias into AI systems. That’s what concerns Anjan Chatterjee, a neuroscientist who specializes in neuroesthetics, the study of what our brains find pleasing. He has found that, relying on baked-in prejudices, we often quickly infer character just from seeing a person’s face. In a paper slated for publication in Psychology of Aesthetics, Creativity, and the Arts, Chatterjee reports that a person’s appearance, and our interpretation of that appearance, can have broad ramifications in professional and personal settings. This conclusion has serious implications for artificial intelligence.

“We need to distinguish between classification and evaluation,” he says. “Classification would be, for instance, using it for identification purposes like fingerprint recognition. . . which was once a privacy concern but seems to have largely faded away. Using the technology for evaluation would include discerning someone’s sexual orientation or for medical diagnostics.” The latter raises serious ethical questions, he says. One day, for example, health insurance companies could use this information to adjust premiums based on a predisposition to a condition.

As the media frenzy around Kosinski’s work has died down over the last few weeks, he is gearing up for his next project: exploring whether the same technology can predict political preferences from facial characteristics. But wouldn’t this just aggravate concerns about discrimination and privacy violations?

“I don’t think so,” he says. “This is the same argument made against our other study.” He then reveals his true goal: “In the long term, instead of fighting technology, which is just providing us with more accurate information, we need solutions to the consequences of having that information. . . like more tolerance and more equality in society,” he says. “The sooner we get down to fixing those things, the better we’ll be able to protect people from privacy or discrimination issues.”

In other words, instead of raging against the facial-recognition machines, we might try to sort through our inherent human biases instead. That’s a much more complex problem that no known algorithm can solve.

**Originally published in the Boston Globe**

Written by Amitha

