Linearized Layer-by-Layer Optimization Algorithm for MLP Training
Cite this article: Zhou Zhijie, Hu Guangrui, Li Qun. Linearized Optimization Layer by Layer for MLP Training[J]. Journal of Shanghai Jiaotong University, 1999, 33(1):15-18.
Authors: Zhou Zhijie, Hu Guangrui, Li Qun
Affiliations: 1. Department of Electronic Engineering, Shanghai Jiao Tong University
2. Nanjing Institute of Communications Engineering
Abstract: A linearized layer-by-layer optimization training algorithm for MLPs (LOLL) is proposed. LOLL trains the MLP's connection weights layer by layer in a cyclic fashion. During weight training, each neuron's nonlinear activation function is represented by its first-order Taylor series, linearizing the network and turning MLP training into a linear problem. To keep the linearization valid, LOLL modifies the network's error function by adding a term proportional to the linearization error, which limits how far the parameters can move in each step. Experimental results show that LOLL trains about 4 times faster than the conventional BP algorithm, and a nonlinear speech-signal predictor built with it achieves good prediction performance.

Keywords: neural networks; multilayer perceptron; fast algorithm; speech processing; speech analysis
Revised: 1998-01-21

Linearized Optimization Layer by Layer for MLP Training
Zhou Zhijie,Hu Guangrui,Li Qun.Linearized Optimization Layer by Layer for MLP Training[J].Journal of Shanghai Jiaotong University,1999,33(1):15-18.
Authors:Zhou Zhijie  Hu Guangrui  Li Qun
Abstract: The main idea of this algorithm is the linearization of the MLP, which is trained layer by layer. Each neuron's activation function is replaced by its first-order Taylor series expansion for linearization. A penalty term proportional to the linearization error is added to the cost function to guarantee the validity of the linearization. The experimental results show that the training speed of LOLL reaches 4 times that of conventional BP, and the speech nonlinear predictor based on the LOLL algorithm performs well.
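The single-layer update described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: a sigmoid layer is linearized around its current weights via a first-order Taylor expansion, and the weight step is found by solving a regularized linear least-squares problem, where the ridge term `lam` stands in for the paper's penalty on the linearization error. All names (`loll_layer_update`, `lam`) are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)

def loll_layer_update(W, X, T, lam=0.1):
    """One linearized update of a single layer's weights (illustrative).

    W: (n_out, n_in) current weights; X: (n_in, n_samples) layer inputs;
    T: (n_out, n_samples) desired layer outputs; lam: penalty weight that
    limits the step size so the first-order Taylor linearization stays valid.
    """
    Z = W @ X            # pre-activations at the current weights
    A = sigmoid(Z)       # current layer outputs
    D = sigmoid_prime(Z) # local slopes used in the Taylor expansion
    # Linearization: sigmoid((W + dW) @ X) ~= A + D * (dW @ X).
    # For each output unit, minimize
    #   ||T_i - A_i - D_i * (dW_i @ X)||^2 + lam * ||dW_i @ X||^2,
    # an independent ridge-style linear least-squares problem.
    dW = np.zeros_like(W)
    for i in range(W.shape[0]):
        Ai = (D[i] * X).T                 # (n_samples, n_in) design matrix
        ri = T[i] - A[i]                  # residual to explain
        G = Ai.T @ Ai + lam * (X @ X.T)   # penalized normal equations
        dW[i] = np.linalg.solve(G, Ai.T @ ri)
    return W + dW
```

Because each output unit's problem is linear in its weight row, the layer update reduces to a handful of small linear solves rather than gradient descent, which is the source of the speedup reported over BP.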
Keywords: neural networks; multilayer perceptron (MLP); fast algorithm; speech processing; speech analysis
This article is indexed by CNKI, VIP (Weipu), Wanfang Data, and other databases.