A Baseline Electroencephalography Motor Imagery Brain-Computer Interface System Using Artificial Intelligence and Deep Learning
This paper presents a baseline or reference (single-channel, single-subject, single-trial) electroencephalography (EEG) motor imagery (MI) brain-computer interface (BCI) that harnesses deep learning artificial neural networks (ANNs) for brainwave signal classification. The EEG electrode or sensor is placed on the scalp over the frontal lobe of the left hemisphere of the brain, approximately above the motor cortex. Signal classification discriminates among three MI classes, namely, right fist closed, neutral and left fist closed events, and the measured accuracy of the deep learning ANN was 83%, which significantly outperforms chance classification. The effectiveness of the system is demonstrated by applying it to the navigation of a virtual environment, specifically, immersive 360-degree panoramas in equirectangular projection.
Introduction
In recent times, brain-computer interface (BCI) prototypes based on electroencephalography (EEG) and the motor imagery (MI) paradigm have become ubiquitous [1]–[11]. These BCIs offer advantages including affordability and well-established protocols and hold out the promise of higher usability in practice.
A plethora of signal classification systems and algorithms have been developed and reported in the literature. These include systems based on principal component analysis (PCA), support vector machines (SVMs), independent component analysis (ICA), linear discriminant analysis (LDA) and their variants [12]–[16]. Other approaches generate artificial intelligence (AI) models based on a wide range of artificial neural network (ANN) architectures coupled with a variety of learning methods, including deep learning algorithms [17]–[21].
Existing methods typically employ multiple EEG electrodes, and systems harnessing a higher number of electrodes are generally judged to yield better performance.
However, higher electrode counts increase cost and setup complexity. Consequently, there exists a need for minimal, cost-effective and functional BCI systems based on EEG, and in particular the motor imagery paradigm, that could serve as baseline or reference systems.
This work presents just such a minimal, cost-effective and functional baseline EEG motor imagery BCI with application to environment navigation.
Fig. 1 outlines the design of the system and highlights its application to the generation of navigation commands for the control of a virtual environment.
The commands gleaned from the classification outcomes of the trained ANN model could also be harnessed in the control of external devices.
Materials and Methods
Participant Recruitment
Healthy participants without any history of neurological conditions volunteered to take part in this study. All participants gave informed consent and were apprised of the benefits of the research. The data analyzed in this paper were recorded from one adult male participant.
Ethical Approval
The studies were approved by the Research Ethics Committee at Topfaith University and were conducted in compliance with all applicable ethical and regulatory requirements.
Electroencephalography (EEG) Brainwave Data Acquisition
Raw EEG data streams were captured using the Emotiv EPOC Wireless EEG headset. This headset has been validated for research-grade brainwave data recordings [22]–[25]. For both training data acquisition and online application, participants wore an Emotiv EPOC Wireless headset while sitting in a relaxed position in a calm environment, with the electrode, F3, positioned as depicted in Fig. 2 over the frontal lobe of the left hemisphere of the brain, approximately above the motor cortex, a region believed to be involved in the generation of signals for movement. Readings from Emotiv EPOC electrodes are reference-compensated.
During training data capture, participants performed designated motor imagery activities in response to automatically generated prompts. In contrast, during online utilization, participants were free to carry out any motor imagery activity and the system would detect those activities or events that it had been trained on.
All data capture operations including the generation of automatic prompts and recording of data streams from the Emotiv EPOC Wireless device were coordinated using a 64-bit Python application named EEG Data Studio that was custom developed for this purpose.
This study delineates three classes of motor imagery activities or events, namely, Left Fist Closed, Neutral (both fists open and relaxed) and Right Fist Closed.
To capture training data, EEG Data Studio prompts the participant by displaying a specific trigger pattern for one second, during which the participant performs the designated motor imagery activity. EEG Data Studio records the corresponding EEG brainwave data stream as an input vector containing as many elements as there are data samples in one second of EEG data, together with a matching three-element output vector. The display is then cleared for a two-second break during which the participant relaxes, and the process repeats until the desired number of samples has been recorded.
For the Left Fist Closed motor imagery activity, EEG Data Studio displays a square at the center of the left half of the screen as shown in Fig. 3 and the participant closes only the left fist. For the Neutral motor imagery activity, EEG Data Studio displays a circle at the center of the screen as depicted in Fig. 4 and the participant relaxes without closing any fist while for the Right Fist Closed motor imagery activity, EEG Data Studio displays a square at the center of the right half of the screen as illustrated in Fig. 5 and the participant closes only the right fist.
The matching three-element output vectors for the three separate output classes using a one-hot encoding configuration are:
Left Fist Closed ➜ {1 0 0}
Neutral ➜ {0 1 0}
Right Fist Closed ➜ {0 0 1}
Since the sampling frequency of the Emotiv EPOC Wireless headset is 128 samples per second, each recorded input vector representing one second of EEG data contains 128 elements or data points.
Three hundred seconds of raw EEG data were collected with one hundred seconds of data recordings for each of the Left Fist Closed, Neutral and Right Fist Closed motor imagery tasks. There were 300 input vectors (one vector for each second of recording and 100 vectors for each class with 128 elements each). Accordingly, a total of 38,400 (300 seconds multiplied by 128 samples per second) input EEG data samples were recorded. Consequently, the recording session lasted 900 seconds (15 minutes) since each one-second recording was followed by a break of two seconds before the next recording.
In summary, the input data comprised 300 vectors, each containing 128 elements whose values represent the raw EEG voltages measured at the electrode in microvolts. The Emotiv EPOC data is reference-compensated. Raw EEG data were analyzed without further preprocessing to avoid distortions, since the artificial neural network (ANN) harnessed could discriminate between the relevant features. The output data comprised 300 vectors, each containing 3 elements corresponding to the specific motor imagery task performed by the participant when the input data was measured. As indicated earlier, the output vector was {1 0 0} for the Left Fist Closed task, {0 1 0} for the Neutral task and {0 0 1} for the Right Fist Closed task.
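As a concrete illustration, the dataset dimensions described above can be assembled as follows. This is a hedged sketch: the arrays named `left`, `neutral` and `right` are random stand-ins for the recorded per-class trials, and all variable names are hypothetical.

```python
import numpy as np

FS = 128          # Emotiv EPOC sampling rate (samples per second)
TRIALS = 100      # one-second trials recorded per class

# Random stand-ins for the recorded raw EEG voltages; each row is one
# one-second trial of 128 samples.
rng = np.random.default_rng(0)
left = rng.normal(size=(TRIALS, FS))
neutral = rng.normal(size=(TRIALS, FS))
right = rng.normal(size=(TRIALS, FS))

# Stack the three classes into the (300, 128) input matrix.
X = np.vstack([left, neutral, right])

# One-hot output vectors: Left {1 0 0}, Neutral {0 1 0}, Right {0 0 1}.
labels = np.repeat([0, 1, 2], TRIALS)    # class index for each trial
Y = np.eye(3)[labels]                    # (300, 3) one-hot matrix
```

The row count (300) and element count (128) reproduce the 38,400 total samples stated above.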
Data Availability
The data (including raw EEG data) that support the findings of this study are available from GitHub at https://github.com/frankekpar/eeg_motor_imagery_bci/blob/main/dataset.zip.
Artificial Neural Network (ANN) Architecture
The TensorFlow framework was harnessed in conjunction with the Keras Application Programming Interface (API) to build, train and deploy the deep learning artificial neural network (ANN) models in the Python programming language [26], [27]. Fundamentally structured as a multi-layer perceptron, the deep learning ANN comprised an input layer with 128 units corresponding to the number of elements in an input vector, two hidden layers each with 512 units, and an output layer with 3 units corresponding to the number of elements in an output vector. The network consisted of densely connected sequential layers. Rectified Linear Unit (ReLU) activations were utilized throughout except for the output layer, where sigmoid activations were utilized. Fig. 6 is a graphical illustration of the ANN architecture. This architecture was selected after extensive experimentation and was motivated by the fact that a single channel of EEG data was analyzed.
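The layer structure described above can be sketched as a bare NumPy forward pass. The weights here are random placeholders (the actual model was built and trained with TensorFlow/Keras), so the sketch illustrates only the shapes and activations, not the trained behavior.

```python
import numpy as np

def relu(a):
    # ReLU activation used in the two hidden layers
    return np.maximum(0.0, a)

def sigmoid(a):
    # Sigmoid activation used in the 3-unit output layer
    return 1.0 / (1.0 + np.exp(-a))

# Random placeholder weights for the 128 -> 512 -> 512 -> 3 topology.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.01, size=(128, 512)), np.zeros(512)
W2, b2 = rng.normal(scale=0.01, size=(512, 512)), np.zeros(512)
W3, b3 = rng.normal(scale=0.01, size=(512, 3)), np.zeros(3)

def forward(x):
    # Dense sequential layers: two ReLU hidden layers, sigmoid output
    h1 = relu(x @ W1 + b1)
    h2 = relu(h1 @ W2 + b2)
    return sigmoid(h2 @ W3 + b3)   # one score per motor imagery class

scores = forward(np.zeros(128))    # a dummy one-second input vector
```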
Application to Virtual Environment Navigation
360-degree panoramas find applications in a plethora of fields ranging from medicine to virtual and augmented reality [28]–[31] and present excellent virtual environments for immersive user experiences. By wrapping a 360-degree panorama as a texture around a sphere and using a three-dimensional (3D) rendering framework such as OpenGL for tessellation of the sphere via triangle strips or other suitable primitives as illustrated in Figs. 7 and 8, perspective-corrected views of regions of interest within the sphere can be generated for seamless navigation of the environment represented by the 360-degree panorama.
In order to effect navigation of the 360-degree panorama, the pan, tilt and zoom (scale) parameters need to be specified. Given the angle representing the pan as θ, the angle representing the tilt as φ and the radius of the sphere used in the 3D rendering framework as r, Fig. 9 illustrates the computation of the 3D cartesian coordinates of the point indicated by the navigation parameters.
As shown in Fig. 9, the computation of the 3D Cartesian coordinates x, y and z of the point with 3D spherical coordinates r, φ and θ can be facilitated by introducing the symbol k to represent the hypotenuse of the right-angled triangle formed by the sides x and z on the X-Z coordinate plane.
Consequently, two right-angled triangles stand out—one with known acute angle φ, hypotenuse r, opposite side y and adjacent side k; the other with known acute angle θ, hypotenuse k, opposite side z and adjacent side x.
The following equations express the coordinate transformation using the symbols in Fig. 9:

y = r sin φ (1)

k = r cos φ (2)

z = k sin θ (3)

x = k cos θ (4)
Note that θ ranges from 0 to 2π radians while φ ranges from −π/2 to π/2 radians. Equations (1)–(4) are applied in the 3D rendering of the panorama in the OpenGL framework.
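A minimal sketch of this transformation, following the two right-angled triangles described above (the function name is illustrative):

```python
import math

def spherical_to_cartesian(r, phi, theta):
    """Map sphere radius r, tilt phi and pan theta to Cartesian x, y, z."""
    y = r * math.sin(phi)    # opposite side of the triangle with angle phi
    k = r * math.cos(phi)    # adjacent side; hypotenuse of the second triangle
    z = k * math.sin(theta)  # opposite side of the triangle with angle theta
    x = k * math.cos(theta)  # adjacent side
    return x, y, z
```

For example, with r = 1, φ = 0 and θ = 0 the point lies on the positive X axis at (1, 0, 0), and the result always satisfies x² + y² + z² = r².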
Actual navigation of the 360-degree panorama, depicted in Fig. 10, involves performing a perspective projection onto a view window of given width (W) and height (H) in the U-V coordinate system, corresponding to the pan (θ0), tilt (φ0) and zoom or scale factor (λ) navigation parameters, while applying the coordinate transformation described in the foregoing.
Navigation of the 360-degree panorama via thought using this baseline BCI is accomplished by using the trained ANN model to detect the EEG brainwave signals corresponding to the Left Fist Closed or Right Fist Closed motor imagery activities and decreasing or increasing the value of the pan (θ0) navigation parameter by a specified quantum accordingly, or leaving it unchanged if the Neutral motor imagery activity or any other unrecognized activity is detected. The exact value of the increment or decrement is made responsive to the speed of the computer system on which the BCI runs and the desired smoothness of the navigation.
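The pan-update rule just described can be sketched as follows. Both `PAN_STEP` and the class-index convention (0 = Left Fist Closed, 1 = Neutral, 2 = Right Fist Closed, matching the one-hot ordering above) are assumptions made for illustration; the paper leaves the quantum tunable.

```python
import math

PAN_STEP = math.radians(2.0)   # assumed navigation quantum (tunable)

def update_pan(pan, class_index):
    """Adjust the pan parameter theta0 from a detected motor imagery class."""
    if class_index == 0:       # Left Fist Closed -> decrement (pan left)
        pan -= PAN_STEP
    elif class_index == 2:     # Right Fist Closed -> increment (pan right)
        pan += PAN_STEP
    # class_index == 1 (Neutral) or any unrecognized value: leave unchanged
    return pan % (2.0 * math.pi)   # keep theta0 within [0, 2*pi)
```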
Results
First, the aggregated input and corresponding output vectors were shuffled at random to improve balance and reduce bias. Next, the input vectors were preprocessed using the min-max scaler in the Scikit-learn machine learning Python toolkit to improve processing efficiency. There was no need to preprocess the output vectors since they comprised 1s and 0s exclusively. Finally, the data was split into two datasets: a training dataset with 70% of the data and a test or validation dataset with the remaining 30%.
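The shuffling, scaling and splitting steps can be sketched with NumPy alone (the per-feature scaling mirrors Scikit-learn's MinMaxScaler); the random arrays below are stand-ins for the recorded dataset:

```python
import numpy as np

# Stand-ins for the recorded (300, 128) inputs and (300, 3) one-hot outputs.
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 128))
Y = np.eye(3)[rng.integers(0, 3, size=300)]

# Shuffle inputs and outputs with the same permutation.
perm = rng.permutation(len(X))
X, Y = X[perm], Y[perm]

# Min-max scale each feature to [0, 1], as MinMaxScaler does.
X_min, X_max = X.min(axis=0), X.max(axis=0)
X = (X - X_min) / (X_max - X_min)

# 70% training / 30% test or validation split.
split = int(0.7 * len(X))
X_train, X_test = X[:split], X[split:]
Y_train, Y_test = Y[:split], Y[split:]
```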
Trained on the training dataset with the Adam deep learning optimization algorithm [32], [33], a default learning rate of 0.001, a batch size of 32 and a categorical cross-entropy loss function over a total of 300 epochs, the ANN achieved a classification accuracy of approximately 83% on both the training and test or validation datasets, vastly outperforming chance classification and demonstrating the effectiveness of the system.
To navigate 360-degree panoramic images through motor imagery activity, essentially by thought alone, the trained ANN model was used to classify EEG data streams captured as described in this paper. The classification output was transformed into an increment of the pan navigation parameter for the Right Fist Closed motor imagery activity, a decrement for the Left Fist Closed activity, and retention of the current value for the Neutral or any other unrecognized activity. Perspective-corrected views of the panorama were generated on the basis of the pan navigation parameter (initially set to 0 and modified in accordance with the detected motor imagery activity), a fixed tilt navigation parameter (default 0) and a default zoom or scale factor, using the coordinate transformations and perspective projection described herein. The tilt and zoom navigation parameters could be set using the mouse or keyboard.
Fig. 11 is a sample 360-degree panorama in equirectangular projection while Figs. 12 and 13 depict perspective-corrected views of the panorama with different navigation parameters.
Discussion
During training and validation dataset acquisition, participants are prompted to actually close the left or right fist. Recording of the EEG data stream commences when the prompt appears on the screen, which precedes the actual motor activity, so the recorded EEG signals encompass signals emitted during the motor imagery thought processes that precede the actual closure of the fist. In online applications of the resulting BCI, such as the navigation of immersive environments represented by 360-degree panoramic images, participants simply imagine closing the left or right fist.
The EEG electrode is placed above the motor cortex within the frontal lobe of the brain as this region is understood to be associated with motion and motor imagery activity [34]. In this study, the EEG device was placed above the left hemisphere of the brain. Alternative placements could be explored in the future.
Conclusion
This paper introduced a minimal electroencephalography (EEG) motor imagery brain-computer interface (BCI) based on deep learning artificial neural networks (ANNs). The effectiveness of the system was demonstrated by applying it to the navigation of immersive 360-degree panoramic images. Since the resultant ANN significantly outperformed chance classification, this BCI could also be utilized for the control of external devices as well as in virtual and augmented reality, as illustrated in this paper. This work could serve as a reference, baseline or benchmark BCI for further research and development.
References
[1] Padfield N, Zabalza J, Zhao H, Masero V, Ren J. EEG-based brain-computer interfaces using motor-imagery: techniques and challenges. Sensors. 2019;19(6):1423.
[2] Kevric J, Subasi A. Comparison of signal decomposition methods in classification of EEG signals for motor-imagery BCI system. Biomed Signal Process Control. 2017;31:398–406.
[3] Cho H, Ahn M, Ahn S, Kwon M, Jun SC. EEG datasets for motor imagery brain-computer interface. GigaScience. 2017;6(7):1–8.
[4] Arpaia P, Esposito A, Natalizio A, Parvis M. How to successfully classify EEG in motor imagery BCI: a metrological analysis of the state of the art. J Neural Eng. 2022;19(3).
[5] Kaya M, Binli MK, Ozbay E, Yanar H, Mishchenko Y. A large electroencephalographic motor imagery dataset for electroencephalographic brain computer interfaces. Sci Data. 2018;5:180211.
[6] Tibrewal N, Leeuwis N, Alimardani M. Classification of motor imagery EEG using deep learning increases performance in inefficient BCI users. PLoS One. 2022;17(7):e0268880.
[7] Sreeja SR, Rabha J, Nagarjuna KY, Samanta D, Mitra P, Sarma M. Motor imagery EEG signal processing and classification using machine learning approach. IEEE International Conference on New Trends in Computing Sciences (ICTCS), pp. 61–6, 2017.
[8] Das K, Pachori RB. Electroencephalogram-based motor imagery brain-computer interface using multivariate iterative filtering and spatial filtering. IEEE Trans Cogn Dev Syst. 2022;15(3):1408–18.
[9] Velasco I, Sipols A, Simon De Blas C, Pastor L, Bayona S. Motor imagery EEG signal classification with a multivariate time series approach. Biomed Eng Online. 2023;22:29.
[10] Yang A, Lam HK, Ling SH. Multi-classification for EEG motor imagery signals using data evaluation-based auto-selected regularized FBCSP and convolutional neural network. Neural Comput Appl. 2023;35:12001–27.
[11] Venkatachalam K, Devipriya A, Maniraj J, Sivaram M, Ambikapathy A, Amiri IS. A novel method of motor imagery classification using EEG signal. Artif Intell Med. 2020;103:101787.
[12] Subasi A, Gursoy MI. EEG signal classification using PCA, ICA, LDA and support vector machines. Expert Syst Appl. 2010;37(12):8659–66.
[13] Razzak I, Hameed IA, Xu G. Robust sparse representation and multiclass support matrix machines for the classification of motor imagery EEG signals. IEEE J Transl Eng Health Med. 2019;7:1–8.
[14] Pahuja SK, Veer K. Recent approaches on classification and feature extraction of EEG signal: a review. Robotica. 2022;40(1):77–101.
[15] Lekshmi SS, Selvam V, Rajasekaran MP. EEG signal classification using principal component analysis and wavelet transform with neural network. IEEE International Conference on Communication and Signal Processing, pp. 687–90, 2014.
[16] Lugger K, Flotzinger D, Schlögl A, Pregenzer M, Pfurtscheller G. Feature extraction for online EEG classification using principal components and linear discriminants. Med Biol Eng Comp. 1998;36(3):309–14.
[17] Lawhern VJ, Solon AJ, Waytowich NR, Gordon SM, Hung CP, Lance BJ. EEGNet: a compact convolutional neural network for EEG-based brain-computer interfaces. J Neural Eng. 2018;15(5):056013.
[18] Al-Saegh A, Dawwd SA, Abdul-Jabbar JM. Deep learning for motor imagery EEG-based classification: a review. Biomed Signal Process Control. 2021;63:102172.
[19] Tabar YR, Halici U. A novel deep learning approach for classification of EEG motor imagery signals. J Neural Eng. 2016;14:016003.
[20] Amin SU, Alsulaiman M, Muhammad G, Mekhtiche MA, Hossain MS. Deep learning for EEG motor imagery classification based on multi-layer CNNs feature fusion. Future Gener Comput Syst. 2019;101:542–54.
[21] Ma J, Yang B, Qiu W, Li Y, Gao S, Xia X. A large EEG dataset for studying cross-session variability in motor imagery brain-computer interface. Sci Data. 2022;9:531.
[22] Badcock NA, Preece KA, De Wit B, Glenn K, Fieder N, Thie J, et al. Validation of the Emotiv EPOC EEG system for research quality auditory event-related potentials in children. PeerJ. 2015;3:e907. doi: 10.7717/peerj.907.
[23] Williams NS, McArthur GM, Badcock NA. It's all about time: precision and accuracy of Emotiv event-marking for ERP research. PeerJ. 2021;9:e10700. doi: 10.7717/peerj.10700.
[24] Badcock NA, Mousikou P, Mahajan Y, De Lissa P, Thie J, McArthur G. Validation of the Emotiv EPOC EEG gaming system for measuring research quality auditory ERPs. PeerJ. 2013;1:e38. doi: 10.7717/peerj.38.
[25] Choong WY, Khairunizam W, Mustafa WA, Murugappan M, Hamid A, Bong SZ, et al. Correlation analysis of emotional EEG in alpha, beta and gamma frequency bands. J Phys: Conf Ser. 2021;1997:012029.
[26] Abadi M, Barham P, Chen J, Chen Z, Davis A, Dean J, et al. TensorFlow: a system for large-scale machine learning. Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI '16), 2016.
[27] Pang B, Nijkamp E, Wu YN. Deep learning with TensorFlow: a review. J Educ Behav Stat. 2019;45:227–48.
[28] Nguyen CQ, Khanna S, Dwivedi P, Huang D, Huang Y, Tasdizen T, et al. Using Google Street View to examine associations between built environment characteristics and U.S. health outcomes. Prev Med Rep. 2019;14:100859.
[29] Ekpar FE. Method and apparatus for creating interactive virtual tours. United States Patent Number 7,567,274, 2009.
[30] Ekpar FE, Yamauchi S. Panoramic image navigation system using neural network for correction of image distortion. United States Patent Number 6,671,400, 2003.
[31] Ekpar FE, Yoneda M, Hase H. Correcting distortions in panoramic images using constructive neural networks. Int J Neural Syst. 2003;13(4):239–50.
[32] Kingma DP, Ba JL. Adam: a method for stochastic optimization. International Conference on Learning Representations (ICLR), 2015.
[33] Zhang Z. Improved Adam optimizer for deep neural networks. IEEE/ACM 26th International Symposium on Quality of Service (IWQoS), pp. 1–2, 2018.
[34] Stuss DT, Knight RT. Principles of Frontal Lobe Function. Oxford University Press; 2012.