I successfully defended my doctoral thesis on “A Bayesian Theory of Mind Approach to Nonverbal Communication for Human-Robot Interactions: A Computational Formulation of Intentional Inference and Belief Manipulation.”
A BIG thank you to my thesis committee members, colleagues, family, and friends for teaching me everything I know, celebrating with me in my victories, and supporting me with endless hugs.
Here is the PhD thesis document.
Abstract: Much of human social communication is channeled through our facial expressions, body language, gaze directions, and many other nonverbal behaviors. A robot’s ability to express emotional states and to recognize those of people through these nonverbal channels is at the core of artificial social intelligence. The purpose of this thesis is to define a computational framework for nonverbal communication in human-robot interactions. We address both sides of nonverbal communication, the decoding and the encoding of social-emotional states through nonverbal behaviors, and demonstrate that the two share an underlying representation.
We use our computational framework to model engagement/attention in storytelling interactions. Storytelling is a mutually regulated interaction between storytellers and listeners, in which a key dynamic is the back-and-forth of speaker cues and listener responses. Listeners convey attentiveness through nonverbal backchannels, while storytellers use nonverbal cues to elicit this feedback.
We demonstrate that storytellers employ plans, albeit short ones, to influence and infer the attentive state of listeners using these speaker cues. We computationally model the intentional inference of storytellers as a planning problem whose goal is getting listeners to pay attention. When accounting for this intentional context of storytellers, our attention estimator outperforms current state-of-the-art approaches to emotion recognition.
By formulating emotion recognition as a planning problem, we apply a recent artificial intelligence method of inverting planning models to perform belief inference. We computationally model emotion expression as a combined process of estimating a person’s beliefs through this inverted inference and then producing nonverbal expressions to affect those beliefs. We demonstrate that a robotic agent operating under our belief manipulation paradigm communicates an attentive state more effectively than current state-of-the-art approaches, which cannot dynamically capture how the robot’s expressions are interpreted by the human partner.
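To give a flavor of the belief-inference idea, here is a minimal toy sketch of a Bayesian update over a listener’s latent attentive state from observed backchannels. The state names, response labels, and all probabilities are illustrative assumptions for this sketch, not the actual model from the thesis.

```python
import math

# Toy latent states and an assumed response likelihood P(response | state):
# attentive listeners are assumed to nod more often. These numbers are
# made up for illustration only.
STATES = ["attentive", "inattentive"]
LIKELIHOOD = {
    "attentive":   {"nod": 0.7, "gaze_away": 0.2, "none": 0.1},
    "inattentive": {"nod": 0.1, "gaze_away": 0.5, "none": 0.4},
}

def bayes_update(prior, response):
    """One Bayesian update of the belief over the latent attentive state."""
    unnorm = {s: prior[s] * LIKELIHOOD[s][response] for s in STATES}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

# Start from a uniform prior and update on a short observation sequence.
belief = {"attentive": 0.5, "inattentive": 0.5}
for obs in ["nod", "nod", "gaze_away"]:
    belief = bayes_update(belief, obs)

print(round(belief["attentive"], 3))
```

A storyteller reasoning this way could then choose speaker cues expected to push the listener’s inferred state toward attentiveness, which is the planning side the abstract describes.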