Hierarchical Deep Feature Learning for Decoding Imagined Speech from EEG

Authors

  • Pramit Saha, University of British Columbia
  • Sidney Fels, University of British Columbia

DOI:

https://doi.org/10.1609/aaai.v33i01.330110019

Abstract

We propose a mixed deep neural network strategy, incorporating a parallel combination of Convolutional (CNN) and Recurrent Neural Networks (RNN), cascaded with deep autoencoders and fully connected layers, towards automatic identification of imagined speech from EEG. Instead of utilizing raw EEG channel data, we compute the joint variability of the channels in the form of a covariance matrix that provides spatio-temporal representations of EEG. The networks are trained hierarchically, and the extracted features are passed on to the next network in the hierarchy until the final classification. Using a publicly available EEG-based speech imagery database, we demonstrate an accuracy improvement of around 23.45% over the baseline method. Our approach demonstrates the promise of mixed DNNs for complex spatio-temporal classification problems.
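The sketch below illustrates the high-level pipeline the abstract describes: channel covariance features computed from raw EEG, fed to parallel CNN and RNN branches, with the concatenated branch outputs classified by fully connected layers. It is a minimal sketch, not the authors' exact architecture: the layer sizes, the GRU choice for the RNN branch, and the classifier head are illustrative assumptions, and the cascaded deep-autoencoder stage from the paper is omitted for brevity.

```python
# Minimal sketch (assumptions noted above), not the published model.
import torch
import torch.nn as nn

def channel_covariance(eeg):
    """eeg: (batch, channels, samples) -> (batch, channels, channels)."""
    eeg = eeg - eeg.mean(dim=-1, keepdim=True)   # zero-mean each channel
    return eeg @ eeg.transpose(1, 2) / (eeg.shape[-1] - 1)

class ParallelCNNRNN(nn.Module):
    def __init__(self, n_channels=64, n_classes=4, hidden=128):
        super().__init__()
        # CNN branch: treats the covariance matrix as a 1-channel image.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(16 * 8 * 8, hidden),
        )
        # RNN branch: reads the covariance matrix row by row.
        self.rnn = nn.GRU(n_channels, hidden, batch_first=True)
        # Fully connected head over the concatenated branch features.
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, eeg):
        cov = channel_covariance(eeg)              # (B, C, C)
        f_cnn = self.cnn(cov.unsqueeze(1))         # (B, hidden)
        _, h = self.rnn(cov)                       # h: (1, B, hidden)
        return self.head(torch.cat([f_cnn, h[-1]], dim=1))

model = ParallelCNNRNN()
logits = model(torch.randn(8, 64, 256))            # 8 trials, 64 ch, 256 samples
print(logits.shape)                                # torch.Size([8, 4])
```

Replacing raw channel time series with a covariance matrix fixes the input dimensionality regardless of trial length and exposes inter-channel structure directly, which is the motivation the abstract gives for using joint channel variability rather than raw EEG.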

Published

2019-07-17

How to Cite

Saha, P., & Fels, S. (2019). Hierarchical Deep Feature Learning for Decoding Imagined Speech from EEG. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 10019-10020. https://doi.org/10.1609/aaai.v33i01.330110019

Section

Student Abstract Track