Please use this identifier to cite or link to this item: http://localhost:8081/jspui/handle/123456789/18258
Title: COGNITIVE WORKLOAD ESTIMATION USING ELECTROENCEPHALOGRAPHY (EEG) SIGNALS
Authors: Chakladar, Debashis Das
Keywords: Electroencephalography, Convolutional neural network, Long Short-Term Memory, Variational Autoencoder, Common Spatial Pattern, Convolutional Block Attention Module, Brain connectivity
Issue Date: Nov-2022
Publisher: IIT, Roorkee
Abstract: Identifying workload levels from Electroencephalography (EEG) is an emerging research topic, as heavy cognitive workload often degrades human reaction and operation times [1]. However, owing to the poor spatial resolution of EEG, features extracted from its signals often yield weak classification results. Moreover, features obtained from high-dimensional EEG may contain irrelevant and redundant information, so selecting a task-specific optimal feature set is a crucial part of EEG-based workload classification. This thesis presents two approaches for improving the spatial resolution of EEG. The first approach (chapter 3) uses attention-based spatial features to identify the brain regions (in terms of scalp electrodes) activated in each cognitive state; the attention-based deep model achieves 83.13% classification accuracy across four workload levels (baseline, low, medium and high workload). In the second approach (chapter 6), the Filter Bank Common Spatial Pattern (FBCSP) method is implemented to enhance spatial resolution by extracting band-wise EEG spatial filters; the FBCSP-enabled deep classification model achieves 87.00% classification accuracy with two workload levels (task and resting state). Both of these experiments use EEG recorded during mental arithmetic (MA) tasks. In chapter 4, the curse-of-dimensionality problem of EEG is addressed by proposing a Grey Wolf Optimizer (GWO)-based feature selection method; the GWO-based deep classification model reaches a maximum classification accuracy of 86.33% for multitasking activity. In chapter 5, a deep representation learning-based clustering approach is proposed to identify workload levels from a multi-stimulus (visual and auditory) n-back task; the proposed model achieves a mean clustering accuracy of 95.20%.
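The thesis itself does not reproduce its implementation on this record page. As an illustration of the spatial-filtering idea behind FBCSP, the sketch below computes plain Common Spatial Pattern (CSP) filters for a single frequency band via a generalized eigendecomposition of the two class covariance matrices; FBCSP applies the same step independently to each band-pass-filtered sub-band. All names, the synthetic data, and the four-channel setup are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_filters=2):
    """Compute CSP spatial filters from two classes of EEG trials.

    X1, X2: arrays of shape (trials, channels, samples).
    Returns W of shape (channels, 2*n_filters): the filters whose
    projected variance best discriminates the two classes.
    """
    def avg_cov(X):
        # Mean spatial covariance over trials (channels x channels)
        return np.mean([np.cov(trial) for trial in X], axis=0)

    C1, C2 = avg_cov(X1), avg_cov(X2)
    # Generalized eigenproblem: C1 w = lambda (C1 + C2) w
    vals, vecs = eigh(C1, C1 + C2)
    order = np.argsort(vals)  # ascending eigenvalues
    # Keep filters from both ends of the spectrum (most discriminative)
    picks = np.concatenate([order[:n_filters], order[-n_filters:]])
    return vecs[:, picks]

# Synthetic demo: class 1 has extra variance on channel 0, class 2 on channel 1
rng = np.random.default_rng(0)
X1 = rng.normal(size=(20, 4, 256)) * np.array([3.0, 1, 1, 1])[:, None]
X2 = rng.normal(size=(20, 4, 256)) * np.array([1.0, 3, 1, 1])[:, None]
W = csp_filters(X1, X2)
print(W.shape)  # (4, 4)
```

In a full FBCSP chain, the log-variance of each filtered trial would then serve as the feature vector fed to the downstream classifier.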
After classifying workload levels, it is essential to show how the activation of brain regions (EEG channels) changes with human cognition. A brain connectivity network identifies the information flow among brain regions for each cognitive state. In chapter 6, an efficient brain connectivity network is constructed using functional and effective connectivity methods, emphasizing the information flow among brain regions during the MA task (with two workload classes: task and resting state). Overall, this thesis advances EEG-based assessment of human cognition by enhancing the spatial resolution of EEG, proposing a robust feature selection method, and characterizing brain dynamics for each cognitive state. Because of volume conduction effects, EEG suffers from poor spatial resolution, which often leads to spurious connectivity results; the two proposed approaches for improving spatial resolution (chapters 3 and 6) can therefore be used in future research to strengthen EEG-based brain connectivity analysis. The proposed brain connectivity network can also be applied in various EEG-based cognitive applications (motor imagery, digit span, driving simulation, etc.) or medical applications (seizure detection, attention-deficit/hyperactivity disorder (ADHD), etc.) to identify the excited brain regions and to characterize communication among them.
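As a minimal illustration of functional connectivity (the thesis additionally uses effective connectivity methods, which are directional and not shown here), the sketch below builds an adjacency matrix by thresholding absolute Pearson correlations between channel pairs. The function name, threshold value, and three-channel synthetic signal are assumptions for demonstration only.

```python
import numpy as np

def correlation_connectivity(eeg, threshold=0.5):
    """Functional connectivity as a thresholded channel-correlation matrix.

    eeg: array of shape (channels, samples). Returns a binary adjacency
    matrix linking channel pairs whose absolute Pearson correlation
    exceeds the threshold, with self-connections removed.
    """
    corr = np.corrcoef(eeg)
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)  # no self-loops
    return adj

# Synthetic demo: channels 0 and 1 share a common source, channel 2 is independent
rng = np.random.default_rng(1)
base = rng.normal(size=512)
eeg = np.stack([
    base + 0.1 * rng.normal(size=512),
    base + 0.1 * rng.normal(size=512),
    rng.normal(size=512),
])
adj = correlation_connectivity(eeg)
print(adj)  # edge between channels 0 and 1; channel 2 isolated
```

Graph measures (degree, clustering, path length) computed on such a matrix are one common way to summarize how information flow differs between cognitive states.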
URI: http://localhost:8081/jspui/handle/123456789/18258
Research Supervisor/ Guide: Roy, Partha Pratim
metadata.dc.type: Thesis
Appears in Collections:DOCTORAL THESES (CSE)

Files in This Item:
File: DEBASHIS DAS CHAKLADAR 18911003.pdf
Size: 20.72 MB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.