This paper introduces a methodology based on deep convolutional neural networks (DCNN) for the recognition of motor imagery (MI) tasks in a brain-computer interface (BCI) system. More specifically, the DCNN is used to classify right-hand and right-foot MI tasks from electroencephalogram (EEG) signals. The proposed method first transforms the input EEG signals into images using time-frequency (T-F) approaches, namely the short-time Fourier transform (STFT) and the continuous wavelet transform (CWT). After the T-F transformation, the images of the MI-task EEG signals are fed to the DCNN stage, where the pre-trained AlexNet model is employed for classification. The efficiency of the proposed method is evaluated on dataset IVa of BCI Competition III. Accuracy, sensitivity, specificity, F1-score, and kappa value are used as evaluation metrics to quantify the results. The obtained results show that the CWT approach yields better results than the STFT approach. In addition, the proposed method achieves an accuracy of 99.35%, which is the best among the accuracy scores reported by existing methods.
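The sketch below illustrates the general shape of such a pipeline: an EEG trial is converted into an STFT or CWT time-frequency map, rescaled into a 3-channel image, and passed through a pre-trained AlexNet whose final layer is replaced for the two MI classes. It is a minimal illustration only, not the paper's implementation; the sampling rate, trial length, wavelet choice, image size, and all function names here are assumptions, and a recent torchvision/scipy/pywt stack is presumed.

```python
"""Minimal sketch (not the authors' code) of a T-F-image + AlexNet pipeline.
Assumptions: single-channel EEG trial sampled at 100 Hz (as in downsampled
BCI Competition III dataset IVa), 2 classes (right hand vs. right foot)."""
import numpy as np
import pywt
from scipy.signal import stft
import torch
import torch.nn as nn
from torchvision import models

FS = 100  # assumed sampling rate (Hz)

def eeg_to_stft_image(x, fs=FS, nperseg=64):
    """STFT magnitude as a 2-D time-frequency map."""
    _, _, Zxx = stft(x, fs=fs, nperseg=nperseg)
    return np.abs(Zxx)

def eeg_to_cwt_image(x, fs=FS, wavelet="morl", n_scales=64):
    """CWT scalogram as a 2-D time-frequency map (Morlet wavelet assumed)."""
    scales = np.arange(1, n_scales + 1)
    coeffs, _ = pywt.cwt(x, scales, wavelet, sampling_period=1.0 / fs)
    return np.abs(coeffs)

def to_rgb_tensor(img, size=227):
    """Normalise a T-F map to [0, 1], resize, and replicate to 3 channels
    so it matches AlexNet's expected input format."""
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    t = torch.tensor(img, dtype=torch.float32)[None, None]   # 1 x 1 x H x W
    t = nn.functional.interpolate(t, size=(size, size),
                                  mode="bilinear", align_corners=False)
    return t.repeat(1, 3, 1, 1)                               # 1 x 3 x H x W

# Pre-trained AlexNet with its last fully connected layer replaced
# for the 2-class MI problem (fine-tuning details omitted).
model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
model.classifier[6] = nn.Linear(4096, 2)

# Forward pass on one synthetic trial (stand-in for a real EEG epoch).
x = np.random.randn(3 * FS)              # 3-second trial
img = eeg_to_cwt_image(x)                # or eeg_to_stft_image(x)
logits = model(to_rgb_tensor(img))
print(logits.shape)                      # torch.Size([1, 2])
```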