Advances in AI and computer vision portend worrisome privacy implications. Image via Frank Baron for the Guardian

When It Comes to Facial Features, Sexual Orientation Is Only the Worrisome Beginning

A recent study drew a flurry of international media attention and protests from human rights organizations after revealing that AI systems could determine a person’s sexual orientation with startling accuracy based only upon photos of their face. In its wake, Stanford University professor Michal Kosinski suggests that other intimate details of one’s personality, and perhaps even fundamental markers of health and inherent traits or abilities, can also be gleaned through advances in AI. Traits ranging from political views to IQ and even predisposition to criminality have been floated as possible targets for such systems.

With last week’s study on sexual orientation already raising fears that states, corporations, and individuals could use similar technology to persecute or blackmail people, could AI systems usher in an era of technological physiognomy and genetic determinism in which people are classified, evaluated, and even targeted at a single digital glance?

The Guardian reports:

Michal Kosinski – the Stanford University professor who went viral last week for research suggesting that artificial intelligence (AI) can detect whether people are gay or straight based on photos – said sexual orientation was just one of many characteristics that algorithms would be able to predict through facial recognition.

Using photos, AI will be able to identify people’s political views, whether they have high IQs, whether they are predisposed to criminal behavior, whether they have specific personality traits and many other private, personal details that could carry huge social consequences, he said.

Kosinski outlined the extraordinary and sometimes disturbing applications of facial detection technology that he expects to see in the near future, raising complex ethical questions about the erosion of privacy and the possible misuse of AI to target vulnerable people.

“The face is an observable proxy for a wide range of factors, like your life history, your development factors, whether you’re healthy,” he said.

Kosinski, an assistant professor of organizational behavior, said he was studying links between facial features and political preferences, with preliminary results showing that AI is effective at guessing people’s ideologies based on their faces.

This is probably because political views appear to be heritable, as research has shown, he said. That means political leanings are possibly linked to genetics or developmental factors, which could result in detectable facial differences.

Facial recognition may also be used to make inferences about IQ, said Kosinski, suggesting a future in which schools could use the results of facial scans when considering prospective students. This application raises a host of ethical questions, particularly if the AI is purporting to reveal whether certain children are genetically more intelligent, he said: “We should be thinking about what to do to make sure we don’t end up in a world where better genes means a better life.”