dc.contributor.author | Rajpura, Param
dc.contributor.author | Meena, Yogesh Kumar
dc.contributor.other | 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2024)
dc.coverage.spatial | United States of America
dc.date.accessioned | 2024-12-27T10:47:03Z
dc.date.available | 2024-12-27T10:47:03Z
dc.date.issued | 2024-07-15
dc.identifier.citation | Rajpura, Param and Meena, Yogesh Kumar, "Towards optimising EEG decoding using post-hoc explanations and domain knowledge", in the 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2024), Orlando, US, Jul. 15-19, 2024.
dc.identifier.uri | https://doi.org/10.1109/EMBC53108.2024.10781846
dc.identifier.uri | https://repository.iitgn.ac.in/handle/123456789/10894
dc.description.abstract | Decoding Electroencephalography (EEG) signals during motor imagery is pivotal for Brain-Computer Interface (BCI) systems, significantly influencing their overall performance. As end-to-end data-driven learning methods advance, the challenge lies in balancing model complexity with the need for human interpretability and trust. Despite strides in EEG-based BCIs, challenges like artefacts and low signal-to-noise ratio emphasise the ongoing importance of model transparency. This work proposes using post-hoc explanations to interpret model outcomes and validate them against domain knowledge. Leveraging the GradCAM post-hoc explanation technique on the EEG motor movement/imagery dataset, this work demonstrates that relying solely on accuracy metrics may be inadequate to ensure BCI performance and acceptability. A model trained using all EEG channels of the dataset achieves 72.60% accuracy, while a model trained with only motor-imagery/movement-relevant channel data shows a statistically insignificant decrease of 1.75%. However, the features the two models rely on differ markedly when assessed against neurophysiological facts. This work demonstrates that integrating domain-specific knowledge with Explainable AI (XAI) techniques is a promising paradigm for validating the neurophysiological basis of model outcomes in BCIs. Our results reveal the significance of neurophysiological validation in evaluating BCI performance, highlighting the potential risks of relying exclusively on performance metrics when selecting models for dependable and transparent BCIs.
dc.description.statementofresponsibility | by Param Rajpura and Yogesh Kumar Meena
dc.language.iso | en_US
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE)
dc.subject | Brain-computer interfaces
dc.subject | Explainable AI
dc.subject | Motor imagery
dc.subject | EEG
dc.title | Towards optimising EEG decoding using post-hoc explanations and domain knowledge
dc.type | Conference Paper
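
The abstract hinges on one concrete technique: applying GradCAM to an EEG classifier and checking the resulting relevance maps against neurophysiological priors. Below is a minimal sketch of that workflow in PyTorch. The `TinyEEGNet` architecture, the random input tensor, and all parameter values are illustrative assumptions of this sketch, not the paper's actual model, data, or code; only the use of GradCAM on multi-channel motor-imagery EEG comes from the abstract.

```python
# Minimal Grad-CAM sketch for an EEG classifier (illustrative only; the
# paper's own model and the EEG motor movement/imagery dataset are NOT used here).
import torch
import torch.nn as nn


class TinyEEGNet(nn.Module):
    """Hypothetical CNN over (batch, 1, channels, time) EEG windows."""

    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 25), padding=(0, 12)),  # temporal filters
            nn.BatchNorm2d(8),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
        )
        self.classifier = nn.Linear(8, n_classes)

    def forward(self, x):
        acts = self.features(x)                          # (B, 8, channels, T')
        logits = self.classifier(acts.mean(dim=(2, 3)))  # pool maps -> class scores
        return logits, acts


def grad_cam(model, x, target_class):
    """Grad-CAM: weight each feature map by its pooled gradient, then ReLU."""
    model.eval()
    logits, acts = model(x)
    acts.retain_grad()                                   # keep grads of non-leaf maps
    logits[:, target_class].sum().backward()
    w = acts.grad.mean(dim=(2, 3), keepdim=True)         # per-map importance weights
    cam = torch.relu((w * acts).sum(dim=1))              # (B, channels, T') relevance
    return cam / (cam.max() + 1e-8)                      # normalise to [0, 1]


if __name__ == "__main__":
    model = TinyEEGNet()
    # Stand-in for one 64-channel, 1-second trial sampled at 160 Hz (random data).
    x = torch.randn(1, 1, 64, 160)
    cam = grad_cam(model, x, target_class=0)
    per_channel = cam.sum(dim=2).squeeze(0)              # total relevance per channel
    print(per_channel.topk(5).indices)                   # candidate channels to validate
```

In the paper's terms, the per-channel relevance ranking produced at the end is the kind of output one would compare against motor-cortex priors (e.g. electrodes C3, Cz, C4 for motor imagery) to check whether a high-accuracy model is actually attending to neurophysiologically plausible features.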