Continuous Speech Recognition by Convolutional Neural Networks
Cite this article: 张晴晴, 刘勇, 潘接林, 颜永红. Continuous speech recognition by convolutional neural networks [J]. 北京科技大学学报, 2015, 0(9): 1212-1217. DOI: 10.13374/j.issn2095-9389.2015.09.015
Authors: 张晴晴, 刘勇, 潘接林, 颜永红
Affiliation: Key Laboratory of Speech Acoustics and Content Understanding, Chinese Academy of Sciences, Beijing 100190, China
Funding: National Natural Science Foundation of China; Strategic Priority Research Program of the Chinese Academy of Sciences; National High Technology Research and Development Program of China; Key Deployment Project of the Chinese Academy of Sciences
Abstract: In speech recognition, convolutional neural networks (CNNs) can greatly compress model size while maintaining performance, compared with the now widely used deep neural networks (DNNs). This paper analyzes in depth how different structures of the convolutional and pooling layers in a CNN affect recognition performance, and compares the CNN with the widely used DNN model. Experimental results on the standard TIMIT speech corpus and on a large-vocabulary, speaker-independent conversational telephone speech corpus show that, compared with the conventional DNN model, the CNN markedly reduces model size while delivering better recognition performance and stronger generalization.
Key words: convolutional neural networks; continuous speech recognition; weight sharing; pooling; generalization
Continuous speech recognition by convolutional neural networks
Abstract: Convolutional neural networks (CNNs), which have succeeded in achieving translation invariance for many image processing tasks, were investigated for continuous speech recognition. Compared with deep neural networks (DNNs), which have proven successful in many speech recognition tasks, CNNs can reduce model size significantly while achieving even better recognition accuracy. Experiments on the standard TIMIT speech corpus and on a conversational speech corpus show that CNNs outperform DNNs in both accuracy and generalization ability.
Keywords: convolutional neural networks; continuous speech recognition; weight sharing; pooling; generalization
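The two CNN ingredients the abstract highlights, weight sharing in the convolutional layer and pooling for shift tolerance, can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the function names, kernel size, and feature dimensions are all illustrative assumptions, and the convolution runs along the frequency axis of a spectrogram-like feature map, as is common in CNN acoustic models.

```python
import numpy as np

def conv1d_freq(x, kernel, bias=0.0):
    """Convolve a (n_frames, n_freq) feature map with one shared 1-D kernel
    along the frequency axis. The same weights are reused at every position;
    this weight sharing is what shrinks the model relative to a fully
    connected DNN layer."""
    n_frames, n_freq = x.shape
    k = len(kernel)
    out = np.empty((n_frames, n_freq - k + 1))
    for f in range(n_freq - k + 1):
        out[:, f] = x[:, f:f + k] @ kernel + bias
    return np.maximum(out, 0.0)  # ReLU non-linearity

def max_pool_freq(x, pool=2):
    """Max-pool along frequency: keep only the strongest activation in each
    band, which gives tolerance to small spectral shifts (e.g. speaker
    variation), analogous to translation invariance in images."""
    n_frames, n_freq = x.shape
    n_out = n_freq // pool
    return x[:, :n_out * pool].reshape(n_frames, n_out, pool).max(axis=2)

rng = np.random.default_rng(0)
feats = rng.standard_normal((5, 40))   # 5 frames x 40 filterbank bins (illustrative)
kernel = rng.standard_normal(8)        # one shared 8-tap filter
h = max_pool_freq(conv1d_freq(feats, kernel), pool=3)
print(h.shape)  # (5, 11): 40 - 8 + 1 = 33 conv outputs -> 11 pooled bands
```

Note the parameter count: the shared kernel needs only 8 weights regardless of the 40-bin input width, whereas a fully connected layer of comparable output size would need 40 weights per output unit; this is the model-size reduction the paper measures.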