
Decoding motor and speech imagery EEG signals based on attention and multiscale model
Cite this article: Ren Lingling, Wang Li, Huang Xuewen, Zhan Qianqian. Decoding motor and speech imagery EEG signals based on attention and multiscale model[J]. Science Technology and Engineering, 2022, 22(34): 15180-15187
Authors: Ren Lingling  Wang Li  Huang Xuewen  Zhan Qianqian
Affiliation: School of Electronics and Communication Engineering, Guangzhou University; Guangzhou Key Laboratory of Information Processing and Transmission
Abstract: To increase the number of commands available to an active brain-computer interface (BCI), a temporally coded experimental paradigm based on motor imagery and speech imagery is proposed. By performing one motor imagery task and one speech imagery task in temporal sequence, four imagery classes are obtained: 1) motor imagery; 2) speech imagery; 3) motor imagery followed by speech imagery; 4) speech imagery followed by motor imagery. For the EEG signals of this paradigm, an attention-based multiscale neural network (AMEEGNet) is designed. First, a dilated convolution and three two-dimensional convolutions with kernels of different scales extract a robust temporal representation of the signal. Then, depthwise and separable convolutions extract spatial and frequency-domain features, respectively. In addition, a squeeze-and-excitation module is added to the model to adaptively select features with high discriminative value. Finally, a fully connected layer performs classification. The model achieves an average accuracy of 71.1% on the temporally coded four-class imagery dataset, whereas EEGNet, MMCNN, Shallow ConvNet, and TSGL-EEGNet reach 57.9%, 60.5%, 68.3%, and 68.4% on the same dataset, so the proposed model attains the highest recognition accuracy.

Keywords: brain-computer interface; motor imagery; speech imagery; temporal coding; multiscale convolution; attention
Received: 2022-03-18
Revised: 2022-09-10

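The feature-extraction pipeline described in the abstract (a dilated temporal convolution plus three convolutions at different scales, followed by a squeeze-and-excitation gate over the resulting feature maps) can be sketched for a single EEG channel in plain NumPy. The kernel sizes, dilation factor, reduction ratio, and random weights below are illustrative assumptions, not the paper's actual hyperparameters or trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_same(x, k, dilation=1):
    """'Same'-padded 1-D convolution along time; dilation inserts zeros between taps."""
    if dilation > 1:
        kd = np.zeros((len(k) - 1) * dilation + 1)
        kd[::dilation] = k
        k = kd
    pad_left = len(k) // 2
    xp = np.pad(x, (pad_left, len(k) - 1 - pad_left))
    return np.convolve(xp, k, mode="valid")          # same length as x

def multiscale_temporal(x, kernel_sizes=(15, 31, 63), dilated=(7, 4)):
    """One dilated branch plus three plain branches with different kernel scales."""
    branches = [conv1d_same(x, rng.standard_normal(dilated[0]), dilation=dilated[1])]
    branches += [conv1d_same(x, rng.standard_normal(ks)) for ks in kernel_sizes]
    return np.stack(branches)                        # (4, T) feature maps

def squeeze_excite(feats, reduction=2):
    """SE gate: average over time (squeeze), two dense layers, sigmoid channel weights."""
    c = feats.shape[0]
    W1 = rng.standard_normal((c // reduction, c))    # bottleneck weights (assumed random)
    W2 = rng.standard_normal((c, c // reduction))
    z = feats.mean(axis=1)                           # squeeze: (C,)
    s = np.maximum(W1 @ z, 0.0)                      # excitation bottleneck, ReLU
    w = 1.0 / (1.0 + np.exp(-(W2 @ s)))              # per-channel gate in (0, 1)
    return feats * w[:, None]                        # channel-wise reweighting

eeg = rng.standard_normal(256)                       # one channel, 256 time samples
feats = multiscale_temporal(eeg)
gated = squeeze_excite(feats)
print(feats.shape, gated.shape)                      # (4, 256) (4, 256)
```

In the paper's actual model these steps run as 2-D convolutions over all electrodes at once, and the SE weights are learned; this sketch only shows how multiscale branches widen the temporal receptive field and how the SE gate rescales each branch's contribution.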
