When our devices can read our emotions

March 18, 2019
AI is helping machines recognize our moods. Affectiva’s Gabi Zijderveld says standards are needed to guard against misuse.

This is an episode of “Business Lab,” MIT Technology Review’s new podcast that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. 

Personal assistants like Siri, Alexa, Cortana, or Google Home can parse our spoken words and (sometimes) respond appropriately, but they can’t gauge how we’re feeling—in part because they can’t see our faces. In the emerging field of “emotion-tracking AI,” however, companies are analyzing the facial expressions captured by our devices’ cameras to allow software of all kinds to become more responsive to our moods and cognitive states.

At Affectiva, a Boston startup founded by MIT Media Lab researchers Rosalind Picard and Rana El Kaliouby, programmers have trained machine-learning algorithms to recognize facial cues and determine, for example, whether we’re enjoying a video or getting drowsy behind the wheel. Gabi Zijderveld, Affectiva’s chief marketing officer and head of product strategy, tells Business Lab that such software can streamline marketing, protect drivers, and ultimately make all our interactions with technology deeper and more rewarding. But to guard against the potential for misuse, she says, Affectiva is also lobbying for industry-wide standards to ensure that emotion-tracking systems are opt-in and consensual.

Top image credit: Daniil Peshkov/123RF