AAAI Publications, Workshops at the Thirty-Second AAAI Conference on Artificial Intelligence

Knowledge-Driven Feed-Forward Neural Network for Audio Affective Content Analysis
Sri Harsha Dumpala, Rupayan Chakraborty, Sunil Kumar Kopparapu


Abstract


Machine learning techniques have shown great promise across domains, but they fail to impress when training data is scarce. Work in affective content analysis cannot take full advantage of machine learning techniques when sufficient training data is unavailable. It is well known that recurrent neural networks (RNNs), particularly those with long short-term memory (LSTM) units, outperform feed-forward neural networks (FFNNs) on sequential data because they are architecturally designed to learn the temporal relationships present in the training data, while FFNNs are not. However, RNNs require sufficient training data to learn these temporal relationships. In this paper, we show that a-priori knowledge about the temporal correlations in the training data can be exploited even in an FFNN architecture. We call this the knowledge-driven FFNN, or k-FFNN. Using the MediaEval dataset, we show that the k-FFNN model not only outperforms the FFNN but also performs better than RNN models (i.e., simple RNN, RNN with LSTM units, and bi-directional RNN with LSTM units (BLSTM)), especially when training data is scarce.
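To make the idea concrete, below is a minimal illustrative sketch, not the authors' implementation: it assumes the a-priori knowledge takes the form of per-segment weights over time (here, a hypothetical assumption that later segments of a clip carry more affective information), which collapse a variable-length sequence of audio features into one fixed-length vector that a plain FFNN can consume. All dimensions, the weighting scheme, and the network shape are assumptions for illustration only.

```python
# Illustrative sketch (not the paper's method): encode a-priori temporal
# knowledge as per-segment weights, reduce the sequence to a fixed vector,
# and classify it with an ordinary feed-forward network.
import numpy as np
import tensorflow as tf

SEG_DIM = 40      # per-segment audio feature dimension (assumed)
N_SEGMENTS = 20   # segments per clip (assumed)

def knowledge_weights(n_segments):
    """Hypothetical a-priori weighting: later segments are assumed to
    carry more affective information, so they receive larger weights.
    Any domain knowledge could be substituted here."""
    w = np.linspace(0.5, 1.5, n_segments)
    return (w / w.sum()).astype("float32")

def to_fixed_vector(segment_features, weights):
    """Weighted sum over time: (n_segments, SEG_DIM) -> (SEG_DIM,)."""
    return weights @ segment_features

# Plain FFNN over the knowledge-weighted features.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEG_DIM,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g. high/low arousal
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Toy usage: one random clip, reduced with the knowledge weights.
clip = np.random.randn(N_SEGMENTS, SEG_DIM).astype("float32")
x = to_fixed_vector(clip, knowledge_weights(N_SEGMENTS))[None, :]
print(model.predict(x, verbose=0).shape)  # (1, 1)
```

The design point of such an approach is that the temporal structure is supplied by the fixed, knowledge-derived weights rather than learned from data, so the network itself has far fewer sequence-dependent parameters to fit, which is what makes it attractive when training data is scarce.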

Keywords


Affective content analysis; audio emotions; a-priori knowledge; feed-forward neural network; recurrent neural network
