f-Similarity Preservation Loss for Soft Labels: A Demonstration on Cross-Corpus Speech Emotion Recognition

Authors

  • Biqiao Zhang, University of Michigan
  • Yuqing Kong, Peking University
  • Georg Essl, University of Wisconsin-Milwaukee
  • Emily Mower Provost, University of Michigan

DOI:

https://doi.org/10.1609/aaai.v33i01.33015725

Abstract

In this paper, we propose a Deep Metric Learning (DML) approach that supports soft labels. DML seeks to learn representations that encode the similarity between examples through deep neural networks. DML generally presupposes that data can be divided into discrete classes using hard labels. However, some tasks, such as our exemplary domain of speech emotion recognition (SER), work with inherently subjective data, data for which it may not be possible to identify a single hard label. We propose a family of loss functions, f-Similarity Preservation Loss (f-SPL), based on the dual form of f-divergence for DML with soft labels. We show that the minimizer of f-SPL preserves the pairwise label similarities in the learned feature embeddings. We demonstrate the efficacy of the proposed loss function on the task of cross-corpus SER with soft labels. Our approach, which combines f-SPL and classification loss, significantly outperforms a baseline SER system with the same structure but trained with only classification loss in most experiments. We show that the presented techniques are more robust to over-training and can learn an embedding space in which the similarity between examples is meaningful.
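The core idea of the abstract can be sketched in code: given soft labels (distributions over emotion classes), a similarity-preservation loss penalizes pairs whose embedding similarity disagrees with their label similarity. The sketch below is a simplified illustration of that idea, not the paper's f-SPL: it substitutes a squared-error penalty for the f-divergence dual-form term, and the choices of label similarity (one minus total variation distance) and embedding similarity (rescaled cosine) are assumptions for illustration only.

```python
import numpy as np

def label_similarity(p, q):
    """Similarity between two soft labels (probability distributions over
    emotion classes): 1 minus total variation distance. Illustrative choice,
    not necessarily the paper's."""
    return 1.0 - 0.5 * np.abs(p - q).sum()

def embedding_similarity(x, y):
    """Cosine similarity rescaled from [-1, 1] to [0, 1] so it is comparable
    to the label similarity above."""
    cos = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
    return 0.5 * (1.0 + cos)

def similarity_preservation_loss(embeddings, soft_labels):
    """Average squared mismatch between pairwise embedding similarity and
    pairwise soft-label similarity over all pairs in a batch. A stand-in
    for f-SPL's f-divergence-based objective."""
    n = len(embeddings)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            s_lab = label_similarity(soft_labels[i], soft_labels[j])
            s_emb = embedding_similarity(embeddings[i], embeddings[j])
            total += (s_emb - s_lab) ** 2
            pairs += 1
    return total / pairs
```

In training, a loss of this shape would be added to the usual classification loss, as the abstract describes, so the network is pushed both to classify correctly and to arrange the embedding space so that distances mirror how similarly annotators labeled the examples.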

Published

2019-07-17

How to Cite

Zhang, B., Kong, Y., Essl, G., & Provost, E. M. (2019). f-Similarity Preservation Loss for Soft Labels: A Demonstration on Cross-Corpus Speech Emotion Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 5725-5732. https://doi.org/10.1609/aaai.v33i01.33015725

Section

AAAI Technical Track: Machine Learning