Teaching Computers to Hear Emotion


Listen to the podcast above, as Steven Cherry introduces the topic and then interviews professor Wendi Heinzelman.

Listen carefully, because it’s short:

“March 21st”

That person sounded sad, right? Let’s try another one.

“Six hundred five!”

Definitely not sadness. Could you tell? It was pride. Listen again.

“Six hundred five!”


How can we hear emotions? It’s not what people are saying—here it’s just numbers, a date in one case—it’s how they’re saying it. And we can all do this. In fact, not being able to do this is a psychological disability. Well, if we can do it reliably, the next step—this is the Information Age, after all—is teaching a computer to do it too.
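To make that idea concrete: emotion-detection systems typically start by extracting prosodic cues such as loudness (energy) and pitch from short frames of audio, and a classifier then maps those cues to emotions. As a hedged, minimal sketch (not the method described in the interview), here is how per-frame energy and a crude autocorrelation-based pitch estimate might be computed with NumPy; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def prosodic_features(signal, sample_rate, frame_ms=30):
    """Illustrative sketch: per-frame energy and pitch estimate for a mono signal."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    energies, pitches = [], []
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        # Short-time energy: mean squared amplitude of the frame.
        energies.append(float(np.mean(frame ** 2)))
        # Crude pitch estimate: find the strongest autocorrelation peak
        # within a plausible speech range of 50-400 Hz.
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        min_lag = sample_rate // 400   # shortest lag considered (400 Hz)
        max_lag = sample_rate // 50    # longest lag considered (50 Hz)
        lag = min_lag + int(np.argmax(ac[min_lag:max_lag]))
        pitches.append(sample_rate / lag)
    return energies, pitches

# Demo on a synthetic 200 Hz tone, one second at an 8 kHz sample rate.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 200 * t)
energies, pitches = prosodic_features(tone, sr)
```

On this synthetic tone the pitch estimate recovers roughly 200 Hz per frame; real emotion detectors track how such features vary over an utterance (rising pitch, bursts of energy) rather than their values in any single frame.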


If you keep listening, you’ll notice the conversation gets more detailed further into the interview. Visit spectrum.ieee.org for the full transcript.

How might we use Emotion Detection?

There are many applications, including improving the human-machine interface. For example, a telephone voice-response unit might notice that a caller becomes frustrated after learning that she’s talking to a machine. It could then transfer her to a human operator or, better yet, adjust its own tone to show empathy.

When interacting with patients or seniors living in isolation, a computer system could detect pain, depression, or other medical conditions so that remedies can be applied. In the following video, a robot recognizes and mirrors the mood of its human conversation partner. The example is crude, but the implication for the future is profound.
