Abstract
Recent studies have shown that alpha oscillations (8–13 Hz) enable the decoding of auditory spatial attention. Inspired by sparse coding in cortical neurons, we propose a spiking neural network model for auditory spatial attention detection. The proposed model independently extracts the patterns of recorded EEG during leftward and rightward attention and uses them to train the network to detect auditory spatial attention. Specifically, our model is composed of three layers, two of which consist of Integrate-and-Fire spiking neurons. We formulate a new learning rule based on the firing rates of pre- and post-synaptic neurons in the first and second spiking layers. The third layer contains 10 spiking neurons, and the pattern of their firing rates is used in the test phase to decode the auditory spatial attention of a given test sample. Moreover, the effects of low connectivity rates between layers and of a specific range of learning-rule parameters are investigated. The proposed model achieves an average accuracy of 90% with only 10% of the EEG signals as training data. This study also provides new insights into the role of sparse coding in both cortical networks subserving cognitive tasks and brain-inspired machine learning.
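The abstract does not spell out the model equations, so the sketch below is only a rough illustration of the two ingredients it names: a layer of Integrate-and-Fire spiking neurons and a weight update driven by pre- and post-synaptic firing rates. All function names, parameter values (`tau`, `v_thresh`, `lr`), and the Hebbian outer-product form of the update are assumptions standing in for the paper's actual architecture and learning rule.

```python
import numpy as np

def lif_layer(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate a layer of leaky integrate-and-fire neurons.

    input_current: array of shape (n_steps, n_neurons), drive at each time step.
    Returns a binary spike train of the same shape.
    Parameter values are illustrative, not taken from the paper.
    """
    n_steps, n_neurons = input_current.shape
    v = np.zeros(n_neurons)
    spikes = np.zeros_like(input_current)
    for t in range(n_steps):
        # Leaky integration of the membrane potential.
        v += dt / tau * (-v + input_current[t])
        fired = v >= v_thresh
        spikes[t, fired] = 1.0
        v[fired] = v_reset  # reset membrane potential after a spike
    return spikes

def rate_based_update(weights, pre_spikes, post_spikes, lr=0.01):
    """Hebbian-style weight update driven by pre- and post-synaptic firing rates.

    Illustrative stand-in only: the abstract says the rule depends on the
    firing rates of pre- and post-synaptic neurons but does not give its form.
    """
    pre_rate = pre_spikes.mean(axis=0)    # mean firing rate per pre-synaptic neuron
    post_rate = post_spikes.mean(axis=0)  # mean firing rate per post-synaptic neuron
    weights += lr * np.outer(pre_rate, post_rate)
    return weights

# Example usage: drive two small layers with random input and update the weights.
rng = np.random.default_rng(0)
pre = lif_layer(rng.uniform(0.0, 2.0, size=(200, 8)))
post = lif_layer(rng.uniform(0.0, 2.0, size=(200, 3)))
w = rate_based_update(np.zeros((8, 3)), pre, post)
```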
| Original language | English |
| --- | --- |
| Pages (from-to) | 555–565 |
| Number of pages | 11 |
| Journal | Neural Networks |
| Volume | 152 |
| Early online date | 6 Jun 2022 |
| DOIs | |
| Publication status | Published - Aug 2022 |