Voiceprint recognition based on knowledge distillation and ResNet
Cite this article: RONG Yujun, FANG Yifan, TIAN Peng, CHENG Jiawei. Voiceprint recognition based on knowledge distillation and ResNet[J]. Journal of Chongqing University (Natural Science Edition), 2023, 46(1): 113-124.
Authors: RONG Yujun  FANG Yifan  TIAN Peng  CHENG Jiawei
Institution: China Mobile (Hangzhou) Information Technology Co., Ltd., Hangzhou 310000, China; College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
Funding: Ministry of Education - China Mobile Research Fund (MCM20180404); National Natural Science Foundation of China (52272388).
Abstract: To address channel mismatch in voiceprint recognition and the incomplete capture of voiceprint features under short-utterance or noisy conditions, a method combining traditional approaches with deep learning is proposed, in which an I-Vector model serves as the teacher for knowledge distillation into a ResNet student model. A metric-learning-based ResNet is constructed with an attentive statistics pooling layer that captures and emphasizes the important information in voiceprint features, improving their discriminability. A joint training loss function is designed that combines the mean square error (MSE) with a metric-learning loss, reducing computational complexity and strengthening the model's learning ability. Finally, the trained model is tested on voiceprint recognition and compared with models based on several other deep learning methods; the equal error rate (EER) is reduced by at least 8%, reaching 3.229%, showing that the proposed model performs voiceprint recognition more effectively.

Keywords: deep learning  knowledge distillation  voiceprint recognition  speaker verification
Received: 2021-07-12
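The attentive statistics pooling described in the abstract weights each frame-level feature by a learned attention score, then pools a weighted mean and standard deviation into a single utterance-level vector. A minimal NumPy sketch follows; it is not the paper's implementation, and the attention parameterization (a `tanh` projection scored by a vector `v`) and shapes are illustrative assumptions.

```python
import numpy as np

def attentive_stats_pooling(frames, W, v):
    """Attentive statistics pooling (illustrative sketch).

    frames: (T, D) frame-level features from the ResNet trunk.
    W: (D, D) attention projection; v: (D,) attention scoring vector
    (both would be learned in the real model).
    Returns a (2*D,) utterance-level embedding of weighted mean and std.
    """
    # One scalar attention score per frame: e_t = v . tanh(W f_t)
    scores = np.tanh(frames @ W) @ v              # (T,)
    alphas = np.exp(scores - scores.max())
    alphas /= alphas.sum()                        # softmax over frames
    mu = (alphas[:, None] * frames).sum(axis=0)   # attention-weighted mean
    var = (alphas[:, None] * frames ** 2).sum(axis=0) - mu ** 2
    sigma = np.sqrt(np.maximum(var, 1e-9))        # attention-weighted std
    return np.concatenate([mu, sigma])
```

Pooling both mean and std (rather than mean alone) lets the embedding reflect how voiceprint features vary over time, which is what makes the statistics "attentive" rather than a plain average.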

Voiceprint recognition based on knowledge distillation and ResNet
RONG Yujun,FANG Yifan,TIAN Peng,CHENG Jiawei.Voiceprint recognition based on knowledge distillation and ResNet[J].Journal of Chongqing University(Natural Science Edition),2023,46(1):113-124.
Authors:RONG Yujun  FANG Yifan  TIAN Peng  CHENG Jiawei
Institution: China Mobile (Hangzhou) Information Technology Co., Ltd., Hangzhou 310000, P. R. China; College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, P. R. China
Abstract: Aiming at the problems of channel mismatch in voiceprint recognition and incomplete acquisition of voiceprint features under short-utterance or noisy conditions, a method that combines traditional methods with deep learning is proposed, in which the I-Vector model serves as the teacher model to distill knowledge into a ResNet student model. We construct a ResNet network based on metric learning and introduce an attentive statistics pooling layer to capture and emphasize the important information in voiceprint features, improving their distinguishability. A joint training loss function is designed that combines the mean square error (MSE) with a metric-learning-based loss, reducing computational complexity and enhancing the model's learning capability. Finally, the trained model is used for voiceprint recognition tests and compared with voiceprint recognition models built on a variety of deep learning methods. The equal error rate (EER) is reduced by at least 8%, reaching 3.229%, indicating that the model can perform speaker verification more effectively.
Keywords:deep learning  knowledge distillation  voiceprint recognition  speaker verification
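The joint training loss combines an MSE term, which pulls the student's embedding toward the I-Vector teacher's embedding, with a metric-learning term that separates speakers. A minimal NumPy sketch is below; the use of a cosine triplet loss, the margin of 0.2, and the weighting factor `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def mse_loss(student_emb, teacher_ivec):
    """Distillation term: MSE between student embedding and teacher I-Vector."""
    return np.mean((student_emb - teacher_ivec) ** 2)

def cosine_triplet_loss(anchor, positive, negative, margin=0.2):
    """Metric-learning term (assumed triplet form): same-speaker pairs
    should be closer in cosine similarity than different-speaker pairs."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(0.0, margin - cos(anchor, positive) + cos(anchor, negative))

def joint_loss(student_emb, teacher_ivec, positive, negative, lam=0.5):
    """Weighted sum of the distillation and metric-learning terms;
    lam balances teacher imitation against speaker discrimination."""
    return (lam * mse_loss(student_emb, teacher_ivec)
            + (1 - lam) * cosine_triplet_loss(student_emb, positive, negative))
```

During training, the teacher I-Vector is fixed and only the student ResNet is updated, so the MSE term transfers the teacher's channel-robust representation while the metric term keeps embeddings discriminative.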
