We address the problem of continuous laughter detection over audio-facial input streams obtained from naturalistic dyadic conversations. We first present a meticulous annotation of laughter events, cross-talk and environmental noise in an audio-facial database with explicit 3D facial mocap data. Using this annotated database, we rigorously investigate the utility of facial information, head movement and audio features for laughter detection. We identify a set of discriminative features using mutual information-based criteria, and show how they can be used with classifiers based on support vector machines (SVMs) and time delay neural networks (TDNNs). Informed by the analysis of the individual modalities, we propose a multimodal fusion setup for laughter detection using different classifier-feature combinations. We also incorporate bagging into our classification pipeline to address the class imbalance caused by the scarcity of positive laughter instances. Our results indicate that a combination of TDNNs and SVMs leads to superior detection performance, and that bagging effectively addresses data imbalance. Our experiments show that our multimodal approach supported by bagging compares favorably to the state of the art in the presence of detrimental factors such as cross-talk, environmental noise, and data imbalance.
Index Terms—Laughter detection, naturalistic dyadic conversations, facial mocap, data imbalance
Authors: B. B. Türker, Y. Yemez, T. M. Sezgin, E. Erzin.
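To illustrate the bagging idea mentioned in the abstract, the sketch below shows one common way such a scheme can be realized; it is a minimal illustration, not the authors' implementation. Each bag pairs all positive (laughter) samples with a randomly under-sampled subset of the abundant negatives, trains an SVM per bag, and combines predictions by majority vote. The feature dimensions and class ratios are synthetic placeholders.

```python
# Hypothetical sketch of bagging with majority-class under-sampling
# to counter the scarcity of positive laughter instances.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic imbalanced data standing in for audio-facial features:
# 50 positive (laughter) vs. 500 negative frames, 10-D features.
X_pos = rng.normal(1.0, 1.0, size=(50, 10))
X_neg = rng.normal(-1.0, 1.0, size=(500, 10))

def train_bagged_svms(X_pos, X_neg, n_bags=5, seed=0):
    """Train one linear SVM per bag; each bag is class-balanced by
    drawing a random negative subset the size of the positive set."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_bags):
        idx = rng.choice(len(X_neg), size=len(X_pos), replace=False)
        X = np.vstack([X_pos, X_neg[idx]])
        y = np.r_[np.ones(len(X_pos)), np.zeros(len(X_pos))]
        models.append(LinearSVC().fit(X, y))
    return models

def predict_vote(models, X):
    """Majority vote over the bagged SVMs (1 = laughter)."""
    votes = np.mean([m.predict(X) for m in models], axis=0)
    return (votes >= 0.5).astype(int)

models = train_bagged_svms(X_pos, X_neg)
preds = predict_vote(models, np.vstack([X_pos[:5], X_neg[:5]]))
```

Under-sampling per bag keeps each SVM's training set balanced while the ensemble, through different negative subsets, still sees most of the majority class.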