Deep learning for EEG-based Motor Imagery classification: Accuracy-cost trade-off
Metadata
Author
Moleón Moya, Javier; Escobar Pérez, Juan José; Ortiz, Andrés; Ortega Lopera, Julio; González Peñalver, Jesús; Martín Smith, Pedro Jesús; Gan, John Q.; Damas Hermoso, Miguel
Publisher
PLoS ONE
Date
2020-06-11
Bibliographic reference
León J, Escobar JJ, Ortiz A, Ortega J, González J, Martín-Smith P, et al. (2020) Deep learning for EEG-based Motor Imagery classification: Accuracy-cost trade-off. PLoS ONE 15(6): e0234178. https://doi.org/10.1371/journal.pone.0234178
Funding
Grant number PGC2018-098813-B-C31 (Spanish Ministerio de Ciencia, Innovación y Universidades); Grant numbers PGC2018-098813-B-C32 and PSI2015-65848-R (Spanish Ministerio de Ciencia, Innovación y Universidades)
Abstract
Electroencephalography (EEG) datasets are often small and high dimensional, owing to
cumbersome recording processes. In these conditions, powerful machine learning techniques
are essential to deal with the large amount of information and overcome the curse of
dimensionality. Artificial Neural Networks (ANNs) have achieved promising performance in
EEG-based Brain-Computer Interface (BCI) applications, but they involve computationally
intensive training algorithms and hyperparameter optimization methods. Thus, awareness
of the usually overlooked quality-cost trade-off is highly beneficial. In this
paper, we apply a hyperparameter optimization procedure based on Genetic Algorithms to
Convolutional Neural Networks (CNNs), Feed-Forward Neural Networks (FFNNs), and
Recurrent Neural Networks (RNNs), all of them purposely shallow. We compare their relative
quality and energy-time cost, but we also analyze the variability in the structural complexity
of networks of the same type with similar accuracies. The experimental results show
that the optimization procedure improves accuracy in all models, and that CNN models with
only one hidden convolutional layer can equal or slightly outperform a 6-layer Deep Belief
Network. FFNNs and RNNs were not able to reach the same quality, although their cost was
significantly lower. The results also highlight that, within the same type of network, size
is not necessarily correlated with accuracy: smaller models can and do match, or even
surpass, larger ones in performance. Overfitting is a likely contributing factor here, since
deep learning approaches struggle with limited training examples.
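As a concrete illustration of the kind of procedure the abstract describes, the sketch below evolves the hyperparameters of a one-hidden-convolutional-layer CNN with a small genetic algorithm. It is a minimal sketch, not the authors' implementation: the synthetic data, the genome layout (number of filters, kernel size, learning rate), the GA settings, and all function names are placeholder assumptions.

```python
# Minimal illustrative sketch: GA-based hyperparameter search for a shallow
# CNN (one hidden convolutional layer). Data, search ranges, and GA settings
# are placeholders, not the configuration used in the paper.
import random
import numpy as np
import tensorflow as tf

rng = random.Random(0)

# Stand-in for an EEG dataset: 200 trials, 64 channels x 100 time samples.
X = np.random.randn(200, 64, 100, 1).astype("float32")
y = np.random.randint(0, 2, size=200)  # binary motor-imagery labels

def random_genome():
    """One candidate: (number of filters, kernel size, learning rate)."""
    return (rng.choice([4, 8, 16, 32]),
            rng.choice([3, 5, 7]),
            10 ** rng.uniform(-4, -2))

def build_model(genome):
    """Shallow CNN: a single hidden convolutional layer plus the output."""
    n_filters, k, lr = genome
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=X.shape[1:]),
        tf.keras.layers.Conv2D(n_filters, (k, k), activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

def fitness(genome):
    """Validation accuracy after a short training run (the costly step)."""
    model = build_model(genome)
    hist = model.fit(X[:160], y[:160], epochs=3, batch_size=32,
                     validation_data=(X[160:], y[160:]), verbose=0)
    return hist.history["val_accuracy"][-1]

def mutate(genome):
    """Resample one gene at random, keeping the other two."""
    fresh = random_genome()
    i = rng.randrange(3)
    return tuple(fresh[j] if j == i else genome[j] for j in range(3))

population = [random_genome() for _ in range(6)]
for generation in range(3):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:3]                       # truncation selection
    population = parents + [mutate(rng.choice(parents)) for _ in range(3)]
best = max(population, key=fitness)
print("best genome:", best)
```

The design point worth noting is that every fitness evaluation requires a full (if short) training run, which is precisely why the energy-time cost of the search matters as much as the final accuracy it achieves.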