Different Methods of EEG Signal Analysis Using Power Spectral Density, ChronoNet and ResNest

Abstract—Brain diseases such as epilepsy can be identified using electroencephalograms (EEGs). Automated EEG analysis has the potential to improve patient care, because manual interpretation requires substantial time, resources, and money. This paper shows that EEG data can be used both to detect brain-related diseases such as epilepsy and to estimate intellectual ability. We propose one method, ResNest, for estimating Intelligence Quotient and two methods, power spectral density (PSD) and ChronoNet, for detecting epilepsy. The first approach uses a dataset hosted on Zenodo with 5-fold cross-validation, the Welch PSD for feature extraction, and several classifiers (Kernel SVM, Naive Bayes, Random Forest, Decision Tree); the Kernel SVM achieved 99.1% accuracy. We also apply ChronoNet, a cutting-edge recurrent neural network architecture built with the Keras framework, which achieves roughly 98.89% accuracy on the Temple University Hospital (TUH) EEG corpus. Finally, the required dataset from Kaggle was used in conjunction with the ResNest method; using ResNest50d (epochs = 10), we achieved a maximum accuracy of 91%.


I. INTRODUCTION
Situations that pose a major risk to one's life can arise as a result of an epileptic seizure. Epilepsy is a neurological disease brought on by the aberrant discharge of brain neurons. Electroencephalogram (EEG) signals are the most important tool for diagnosing epilepsy, because epileptic seizures manifest in them as distinct, typically rhythmic signals that frequently precede or coincide with the first observed changes in behavior. By examining EEG signals, people affected with epilepsy can have their condition diagnosed, seizures can be recognized, and treatment can be started right away. Seizure detection can be used to respond to a seizure that is about to occur or is currently occurring, or to differentiate epileptic seizures from other disorders that have paroxysmal, seizure-like symptoms.
Due to the immense labor involved in identifying seizures by human experts and the large number of epilepsy patients, various efforts have been made to develop computerized seizure detection methods. Because of this, machine learning algorithms that interpret EEG automatically have become increasingly popular. In [1], four methods are used to identify seizures, with the best classification results produced by random forest (RF), the decision tree (DT) algorithm C4.5, SVM+RF, and SVM+C4.5. Some studies use approximate entropy and sample entropy extracted by wavelet packet decomposition (WPD) as features, with SVM and extreme learning machine as classifiers, to identify epileptic seizures [2]. In [3], the number of variables is further reduced via WPD and kernel PCA (KPCA), and the Takagi-Sugeno-Kang (TSK) fuzzy logic system is used as the classifier. Deep learning has recently attracted a lot of attention in the area of feature learning and is quickly becoming a powerful machine learning paradigm [4]-[6]. With increasing network depth, deep learning model performance nearly reaches a plateau; this phenomenon is referred to as network degradation. The residual learning framework, which allows networks to keep converging and attain improved accuracy even as the number of layers grows, can be used to address this issue [6], [8]. In this study, residual learning was used to examine how well deeper networks perform on raw EEG data [7], [8]. The attention mechanism, inspired by human vision, has also attracted researchers' interest [9]. We introduce a novel ResNet variant called the Split-Attention Network (ResNeSt), built by stacking many Split-Attention blocks in the ResNet fashion.

(Submitted on June 17, 2023. Published on October 03, 2023. Md M. Hasan, Rajshahi University of Engineering & Technology, Bangladesh (corresponding e-mail: mehedi.hasan28.bd@gmail.com). S. Rahman, Ahsanullah University of Science and Technology, Bangladesh (e-mail: ).)
This study proposes a thorough analysis to distinguish between healthy and epileptic subjects using two approaches, one extracting features in the frequency domain using the Welch power spectral density and one operating on the time series using ChronoNet (Keras framework), plus a third approach, ResNest, to determine subjects' comprehension level by examining their EEG signals. The first method categorizes epileptic seizures using features extracted in the frequency domain from the Welch power spectral density. The mean, standard deviation, minimum value, and maximum value of the epoch signal fluctuations were measured using the dataset of Nigerian subjects. These measurements were used to train classifiers (Kernel SVM, Random Forest, Naive Bayes, Decision Tree), whose accuracy, loss, confusion matrix, sensitivity, and specificity were then calculated. On the Nigerian data, the Kernel SVM provides the highest accuracy among the classifiers. We concentrated on the normal and epileptic classes of EEG data in this investigation; on the Nigerian data, we were able to attain a maximum accuracy of 99.1%, which is significantly higher than the accuracy of the related work [10].

(A. Sarkar, Rajshahi University of Engineering & Technology, Bangladesh (e-mail: asarkar@eee.ruet.ac.bd). F. Khan, Military Institute of Science and Technology, Bangladesh (e-mail: fayez.khan194@gmail.com). A. Seum, Ahsanullah University of Science and Technology, Bangladesh (e-mail: seum.cse@aust.edu).)

For the second approach, the dataset we used is the TUH Abnormal EEG Corpus, and ChronoNet, a novel recurrent neural network (RNN) architecture, is presented in our proposed work. The raw EEG time series was the input to our RNN designs, which were inspired by successes in time-domain signal classification. By combining concepts from 1-dimensional convolution layers [11], gated recurrent units [12], inception modules [13], and densely connected networks [14], we developed a novel deep gated RNN called ChronoNet that achieves an accuracy of 98.89%, about 8.9% higher than a related study [15]. Four 1D convolution layers were employed in this work to achieve our goals. For the third strategy, data gathered from the Kaggle website was used in conjunction with the ResNest method to determine an individual's capacity for abstract thought. We achieved a maximum accuracy of 91% using ResNest50d (epochs = 10), which is approximately 10% higher than the accuracy achieved in the related study [16]. Comparison tables for the ResNest50d, ResNest14d, and ResNest26d models are presented on the basis of accuracy, precision, epochs, and F1 score.

II. APPROACH I

A. Data Preparation
The first approach uses the dataset found at https://zenodo.org/record/1252141/files/EEGs. A total of 212 Nigerians took part in the research. The fourteen-channel EEG for the Nigerian data was recorded at a resolution of 16 bits and a sampling rate of 128 Hz. The electrodes were positioned in accordance with the standard 10-20 system in the antero-frontal (AF3, AF4, F3, F4, F7, F8), fronto-central (FC5, FC6), occipital (O1, O2), parietal (P7, P8), and temporal (T7, T8) areas. The most important information lies in the range from 1 Hz to 30 Hz, so a bandpass filter was employed to extract those frequencies (delta (0.5-1 Hz), delta (1-2 Hz), delta (2-4 Hz), theta (4-8 Hz), alpha (8-16 Hz), and beta (16-32 Hz)) [10]. Two distinct groups, Epilepsy (subjects prone to epileptic seizures) and Control (healthy subjects), were created from the dataset. The epoch duration was set to 10 seconds. The EEG signal and power spectral density of the first subject are shown, in correspondence with the international 10-20 system-based electrode placement, in Fig. 1.
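The band-pass filtering, epoching, and Welch PSD summary steps described above can be sketched as follows. This is a minimal illustration with a synthetic one-channel signal; the filter order, the `nperseg` setting, and the signal itself are assumptions, since the paper does not state those details.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, welch

FS = 128       # sampling rate of the Nigerian dataset (Hz)
EPOCH_S = 10   # epoch duration in seconds

def bandpass(x, lo=1.0, hi=30.0, fs=FS):
    """Zero-phase band-pass filter keeping the 1-30 Hz band."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def epoch_features(x, fs=FS, epoch_s=EPOCH_S):
    """Split a 1-D signal into fixed-length epochs and summarise
    each epoch's Welch PSD with mean / std / min / max."""
    n = fs * epoch_s
    epochs = x[: len(x) // n * n].reshape(-1, n)
    feats = []
    for ep in epochs:
        _, psd = welch(ep, fs=fs, nperseg=fs)  # 1 Hz resolution
        feats.append([psd.mean(), psd.std(), psd.min(), psd.max()])
    return np.asarray(feats)

rng = np.random.default_rng(0)
sig = rng.standard_normal(FS * 60)   # one minute of synthetic "EEG"
X = epoch_features(bandpass(sig))
print(X.shape)  # (6, 4): 6 epochs x 4 summary features
```

Each row of `X` is one epoch's feature vector, ready to feed a classifier.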

B. Classifiers
In the analysis, the input to the classification model was provided by features corresponding to the different epochs within a subject. The classification model receives its input from features extracted by the Welch power spectrum technique. The output parameters were compared across four classifiers (Kernel SVM, Random Forest, Decision Tree, Naive Bayes).

1) The Support Vector Machine (SVM)
Support vector machines are supervised learning algorithms that can categorize unlabeled data after training on labelled data [10]. They work by specifying decision boundaries using the idea of decision planes, or hyperplanes. The features are divided into groups corresponding to the various classes using hyperplanes. The Radial Basis Function (RBF) kernel was used, with its parameter value set to 1. The SVM attempts to characterize the data by developing a function that divides the data points into their respective groups with the fewest errors and the largest (maximum) margin possible.

2) Random Forest (RF) and Decision Tree (DT)
Random Forest uses the ensemble method of bagging, commonly referred to as bootstrap aggregation. The Random Forest (RF) classifier is an ensemble approach that combines the forecasts of several decision trees. By choosing features at random at every split, RF builds a collection of trees. The ensemble's output is the class receiving the most tree votes. The random forest method is more resistant to errors and outliers; as a result, the Decision Tree suffers from overfitting, but the Random Forest does not.

3) Naive Bayes
The Bayesian theorem serves as the foundation for the statistical Naive Bayes (NB) classifier. The classifier is considered naive because it relies on a strong feature-independence assumption. The method used to calculate the probability of the target class is the key distinction between the various reported NB versions, which include Simple Naive Bayes, Gaussian Naive Bayes (used in this work), Multinomial Naive Bayes, Bernoulli Naive Bayes, and Multi-variate Poisson Naive Bayes.
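The four classifiers above can be compared with a short scikit-learn sketch. The synthetic stand-in features, the train/test split, and all hyper-parameters other than the RBF kernel value of 1 are illustrative assumptions, not the paper's actual data or settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the per-epoch PSD features (mean/std/min/max).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 4)),   # class 0: control
               rng.normal(1.5, 1.0, (100, 4))])  # class 1: epilepsy
y = np.array([0] * 100 + [1] * 100)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)

models = {
    "Kernel SVM": SVC(kernel="rbf", gamma=1.0),  # RBF kernel, value 1 as in the text
    "Random Forest": RandomForestClassifier(random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
}
for name, model in models.items():
    print(name, round(model.fit(Xtr, ytr).score(Xte, yte), 3))
```

The same fitted models can then be passed to a cross-validation or confusion-matrix routine for the metrics reported below.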

C. Performance Parameters
The following three parameters were used in the performance analysis of the suggested work [17], [18]:

Accuracy = (TP + TN) / (TP + TN + FP + FN)
Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)

TP (true positive) is the number of seizure segments correctly identified as seizures. FN (false negative) is the number of seizure segments incorrectly identified as non-seizures. FP (false positive) is the number of non-seizure segments mistakenly identified as seizures. TN (true negative) is the number of non-seizure segments correctly identified as non-seizures. Accuracy is the proportion of correctly labeled seizure and non-seizure segments. Sensitivity measures how well seizures are identified; a highly sensitive classifier achieves very good seizure-segment recognition performance. Specificity is the proportion of correctly identified non-seizure segments; a classifier with high specificity easily identifies seizure-free segments. In the five-fold cross-validation, accuracy served as the performance parameter. The overall efficacy of the model is shown by the ROC curve (receiver operating characteristic curve). This approach is illustrated by the flowchart in Fig. 2. Table I lists the values of the aforementioned parameters for the various classifiers applied to the Nigerian dataset, and Fig. 3 shows the ROC curve and confusion matrix.
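The three parameters can be computed directly from the confusion-matrix counts. The counts used below are illustrative placeholders, not results from the paper.

```python
def seizure_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # seizure segments correctly detected
    specificity = tn / (tn + fp)   # non-seizure segments correctly detected
    return accuracy, sensitivity, specificity

# Example with made-up counts: 95 true positives, 90 true negatives,
# 5 false positives, 10 false negatives.
acc, sens, spec = seizure_metrics(tp=95, tn=90, fp=5, fn=10)
print(acc, round(sens, 3), round(spec, 3))  # 0.925 0.905 0.947
```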
Zabihi et al. [19] showed that once the intersection sequence was established, seven distinctive features could be extracted. The complete feature set was then fed through a two-tier diagnosis system that uses linear discriminant analysis (LDA) and naive Bayesian classifiers. The suggested model was then tested, and its efficacy was demonstrated to be 94.6% accuracy, with 88.27% sensitivity and 93.21% specificity. Using the Nigerian data, our suggested technique achieves 99.16% accuracy, 99.18% sensitivity, and 99.01% specificity for the Kernel SVM classifier. In this approach, five-fold accuracies of [0.9920, 0.9920, 0.9914, 0.9914, 0.9914] and an average accuracy of 0.9916 were achieved on the Nigerian data with the Kernel SVM.

III. APPROACH II

A. Data Description
The TUH EEG Corpus includes 23,257 EEG recordings from 13,551 patients. The dataset as a whole contains 73% of EEG data points that can be linked to abnormal sessions. A demographically balanced subset of the TUH EEG Corpus was hand-selected to produce the TUH EEG Abnormal Corpus. The majority of the recordings in this subset were taken at a sampling rate of 250 Hz, and the subset included 1488 abnormal and 1529 normal EEG sessions. Additional splitting produced two sets of data: a test set with 127 abnormal and 150 normal recordings, and a training set with 1361 abnormal and 1379 normal recordings. The balanced TUH EEG dataset is shown in Fig. 4. The raw data was filtered and any non-EEG signal was removed in order to provide highly accurate results.

B. Configuration of the Proposed ChronoNet
Densely connected recurrent layers are used in conjunction with exponentially growing kernel lengths in inception-style layers built from 1D convolutions. A state-of-the-art recurrent unit called the gated recurrent unit (GRU) helps the network learn dependencies and correlations across extended periods of time. Both LSTMs and GRUs have been shown to significantly outperform classical RNNs, though it is still unclear which is superior [20]. GRUs are used in this study instead of LSTMs since they require fewer parameters, allowing faster training and generalization with the same amount of data. The inception module makes use of filters of various sizes, as opposed to standard convolutional neural networks, which use a single homogeneous filter in a convolution layer, in order to capture characteristics at multiple scales [15]. The input layer of the method is the signal length (16000) times the number of channels (22). Four convolution layers receive the input layer as their input. The number of filters and the stride are uniform across layers (32 and 2, respectively), while the kernel sizes are fixed at 2, 4, 8, and 16, one per layer. The resulting 32 channels from each convolution layer are concatenated to obtain 128 channels. There are a total of four concatenation blocks, the structure repeating itself after the initial concatenation. This block is followed by a section consisting of four GRU layers. The fourth concatenation layer feeds the first GRU layer, and the outputs of the first and second GRU layers are concatenated. The output of the third GRU layer is (1000, 32). We used a total of four GRU layers, with the output of the third layer being combined with that of the first and second before the activation function was applied. The flow of the proposed ChronoNet model is shown in Fig. 5.
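The configuration above can be sketched in Keras. This is a minimal interpretation of the description, not the authors' exact code: where the text is ambiguous, the `"same"` padding, the exact dense-connection wiring of the GRU layers, and the final sigmoid dense layer are assumptions.

```python
from tensorflow.keras import layers, Model

def inception_block(x):
    """Four parallel strided 1-D convolutions with exponentially growing
    kernels (2, 4, 8, 16), each with 32 filters, concatenated to 128 channels."""
    branches = [layers.Conv1D(32, k, strides=2, padding="same",
                              activation="relu")(x)
                for k in (2, 4, 8, 16)]
    return layers.Concatenate()(branches)

inp = layers.Input(shape=(16000, 22))   # 22-channel EEG, 16000 samples
x = inp
for _ in range(4):                      # length: 16000 -> 8000 -> 4000 -> 2000 -> 1000
    x = inception_block(x)

# Densely connected GRU section: later GRUs see the outputs of earlier ones.
g1 = layers.GRU(32, return_sequences=True)(x)
g2 = layers.GRU(32, return_sequences=True)(g1)
g3 = layers.GRU(32, return_sequences=True)(layers.Concatenate()([g1, g2]))
g4 = layers.GRU(32)(layers.Concatenate()([g1, g2, g3]))
out = layers.Dense(1, activation="sigmoid")(g4)  # binary normal/abnormal

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```

After four inception blocks the sequence length is 1000 with 128 channels, matching the (1000, 32) shape quoted for the GRU outputs.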

C. Implementation of the Proposed ChronoNet Using the Keras Framework
The data was retrieved from https://isip.piconepress.com/projects/tuheeg/downloads. To filter the EEG signals, a band-pass filter with a range of 1-30 Hz was applied at a sampling frequency of 128 Hz. The EEG data was divided into 4-second epochs. The 'relu' activation function was applied to the fourth convolutional layer. Since we were working with a binary problem, we implemented the 'sigmoid' activation function in the GRU layer. Next, we trained the model with optimizer = 'adam', loss = 'binary_crossentropy', and metrics = ['accuracy']. After fitting, the model reaches an accuracy of 98.89% and a loss of 2.58%. The other performance parameters are given in Table II, and the ROC curve for Approach II is given in Fig. 6. Nigam et al. [21] offered an approach for identifying epileptic seizures autonomously from EEG signals by integrating a LAMSTAR artificial neural network with a multistage nonlinear pre-processing filter. System performance was around 97.2% after multistage nonlinear filtering was used for preprocessing, the LAMSTAR input was prepared, the ANN was trained, and the system was evaluated. Our proposed method achieved a higher accuracy than the latter. Table III shows the comparison of the suggested methods with the relevant work.

IV. APPROACH III

A. Configuration of the Proposed ResNest Method
Residual networks (ResNets), a ConvNet architecture, have achieved recent successes in the field of computer vision [29]. ResNets often have many layers, so we examined whether comparable networks with more layers would also perform well when decoding EEG. ResNets add the output of a convolutional layer to its input, so the layer only needs to learn how to produce a residual that modifies the output of the earlier layers (hence the name residual network). Training can be enhanced by using the same implementation as unified CNN operators; such a compute block is known as a Split-Attention block. By stacking many Split-Attention blocks in the ResNet fashion, we produce a fresh variant that we call the Split-Attention Network (ResNeSt). The ResNet-D model is the foundation of ResNeSt [30]. ResNet-D-50's accuracy is increased through mix-up training.
The knowledge level has been determined using the data from https://www.kaggle.com/datasets/madyanomar/eegdata-distance-learning-environment with the PyTorch framework. After the data has been gathered, the understanding level is initially determined by interpreting the raw signal.
The groups were then established based on the subject and video identification numbers. Following that, images with a 3-epoch duration are generated from the EEG signal (sampling frequency of 128 Hz). The images were of the form (16, 14, 384), where 16 was the number of epochs, 14 was the number of channels, and 384 was the length of one epoch. The continuous wavelet transform (Morlet wavelet) is employed for scaling. Modified images are added to the training data when using AutoAugment, a method in which the modifications are adaptively learned. The batch size was then set to 8, giving a final torch size of (8, 14, 230, 384). After that, the dataset was split into training and validation sets. Finally, ResNest26d, ResNest14d and ResNest50d were implemented. Only the first layer has to be changed according to the channel number of the transformed data; the first layer of the proposed network is Conv2d(14, 32, kernel_size=(3,3), stride=(2,2), padding=(1,1), bias=False), where 14 is the number of channels in the dataset and 32 is the number of filters. [16] built a ResNeSt-50-fast model using the Split-Attention block, which increased accuracy to 80.64%. To avoid adding extra processing cost to the model, effective average downsampling is applied in this ResNeSt-fast variant before the 3x3 convolutions. With the downsampling placed after the convolutional layer, ResNeSt-50 achieves 81.13% accuracy, whereas our proposed ResNest50d achieves approximately 91% accuracy.

V. COMPARISON OF THE THREE PROPOSED WORKS
A comparison is made to evaluate the performance parameters (F1 score, precision, and accuracy) of the three proposed approaches in this study. Fig. 9 shows the comparison. The ResNest method gives lower accuracy and precision than the other two approaches.

VI. CONCLUSION
A neurological disease's diagnosis often begins with determining whether normal or aberrant brain activity is shown in an EEG recording. Given that manual EEG interpretation is a costly and time-consuming operation, any classifier that automates this initial distinction can shorten treatment periods and relieve clinical care providers. The analysis of EEG signals can also be utilized to determine the understanding level, which can be used for measuring intellectual ability. In this study, three strategies were utilized to analyze the EEG data; two of them were used to separate epileptics from healthy people, while the third was used to gauge the subjects' level of comprehension. The first approach combined the Welch power spectral density with machine learning classifiers, achieving a maximum accuracy of 99.1%. The second approach, ChronoNet, achieved an accuracy of 98.89%. The ResNest method is the third method, which assesses a person's capacity for abstract thought; the maximum accuracy we could achieve using ResNest50d (epochs = 10) was 91%.

VII. FUTURE WORK
There are three directions for future implementation to improve the performance of the parameters. The framework models could be enhanced with a larger dataset, ML models such as AdaBoost, XGBoost, Light Gradient Boosting Machine, and CatBoost, and innovative deep learning methods. Future work can also concentrate on feature selection techniques to remove insignificant features, improving performance. Finally, it can improve performance by tackling a variety of connected issues, such as determining risk levels and estimating the likelihood of recurrence.
Fig. 1. (a) The EEG signal; (b) power spectral density of the first subject.

Fig. 2. Flow chart of Approach I (power spectral density method).
Fig. 3. (a) ROC curve and (b) confusion matrix for Approach I (on Nigerian data, using Kernel SVM).

Fig. 5. Flow of the proposed ChronoNet model.

Conv2d(14, 32, kernel_size=(3,3), stride=(2,2), padding=(1,1), bias=False), where 14 is the number of channels in the dataset and 32 is the number of filters. The model parameters are shown in Fig. 7. For the 5-fold cross-validation, the number of epochs was set to 10 and 20, and the three above-mentioned models were fitted accordingly. The training and validation curves of the three proposed ResNest models (ResNest26d, ResNest14d and ResNest50d) with epochs = 10 are shown in Fig. 8. Table IV illustrates the values of the performance parameters for the three aforementioned models for the different epoch values.

Fig. 9. The comparison of the three proposed approaches.

TABLE I: PERFORMANCE PARAMETERS FOR NIGERIAN DATA

TABLE II: PERFORMANCE PARAMETERS FOR THE CHRONONET METHOD

D. Analyzing the Existing Works in Terms of the Performance Parameters
Roy et al. [15] used linearly varying filter lengths of 3, 5, and 7 in a 1D convolution layer to produce an approximate 89.15% accuracy during training, whereas our proposed technique achieves an approximate 98.89% accuracy.
Fig. 6. ROC curve for Approach II.

Fig. 4. Balancing of the TUH EEG dataset: training normal data 46%, training abnormal data 45%, test normal data 5%, test abnormal data 4%.

TABLE III: COMPARISON OF THE SUGGESTED METHODS WITH THE RELEVANT WORK

TABLE IV
The second method uses the Inception Convolutional Densely Connected Gated Recurrent Neural Network, a new kind of deep learning architecture. By surpassing the dataset's best previously reported accuracy of 97.78%, this novel RNN architecture (ChronoNet) sets a new benchmark.