Audio-Visual Prediction of Head-Nod and Turn-Taking Events in Dyadic Interactions

Head-nods and turn-taking both contribute significantly to conversational dynamics in dyadic interactions. Timely prediction and use of these events is valuable for dialog management systems in human-robot interaction. In this study, we present an audio-visual prediction framework for head-nod and turn-taking events that can also be utilized in real-time systems. Prediction systems based on Support Vector Machines (SVM) and Long Short-Term Memory Recurrent Neural Networks (LSTM-RNN) are trained on human-human conversational data. Unimodal and multimodal classification performances for head-nod and turn-taking events are reported over the IEMOCAP dataset.
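As a rough illustration of the LSTM-RNN branch of such a framework, the sketch below scores short windows of fused audio-visual features for a binary event (e.g., head-nod vs. none). This is not the paper's implementation; all names, feature dimensions, window lengths, and layer sizes are assumptions chosen for demonstration.

```python
# Minimal sketch of an LSTM-based event predictor over audio-visual features.
# All dimensions and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class EventPredictor(nn.Module):
    """Binary classifier (event vs. no-event) over a window of feature frames."""
    def __init__(self, feat_dim=50, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, 2)  # event / no-event logits

    def forward(self, x):
        # x: (batch, time, feat_dim) sequence of fused audio-visual features
        _, (h_n, _) = self.lstm(x)
        return self.classifier(h_n[-1])  # classify from the last hidden state

# Example: score a batch of 8 one-second windows (100 frames, 50-dim features)
model = EventPredictor()
logits = model(torch.randn(8, 100, 50))
probs = torch.softmax(logits, dim=-1)  # per-window event probabilities
```

Because the model emits a score per sliding window, it can run incrementally over an incoming feature stream, which is what makes this style of predictor usable in real-time dialog management.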

Index Terms: head-nod, turn-taking, social signals, event prediction, dyadic conversations, human-robot interaction

Authors: B. B. Turker, E. Erzin, Y. Yemez and M. Sezgin

Read the full paper.
