A new feedforward neural network pruning algorithm
Cite this article: AI Fang-Ju, LI Xiao-Fang. A new feedforward neural network pruning algorithm [J]. Journal of Sichuan University (Natural Science Edition), 2008, 45(6): 1352-1356.
Authors: AI Fang-Ju, LI Xiao-Fang
Affiliations: 1. College of Mathematics and Computer Science, Hubei University, Wuhan 430062, China
2. Wuchang Adult Technical Secondary School, Wuhan 430060, China
Abstract: The number of hidden-layer neurons in a feedforward neural network is closely related to its learning and generalization abilities. A new feedforward neural network pruning algorithm is obtained by improving the Neural Network Self-Configuring Learning (NNSCL) algorithm, using the Generalized Inverse Matrix (GIM) algorithm to solve the underlying least-squares problem. Applied to a large trained network, the new algorithm removes "redundant" hidden neurons, yielding a minimal network that preserves the original performance without retraining and generalizes well. Simulation results demonstrate the effectiveness and feasibility of the algorithm.
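For context, the least-squares step that the generalized inverse solves is a standard one. With illustrative symbols H (hidden-layer activation matrix), T (target matrix), and W (hidden-to-output weights), chosen here only for exposition since this page shows none of the paper's notation, the output weights of the (pruned) network are

\[
  W^{*} \;=\; \arg\min_{W} \lVert HW - T \rVert_{F}^{2} \;=\; H^{+}T,
  \qquad
  H^{+} = (H^{\top}H)^{-1}H^{\top} \ \text{(when $H$ has full column rank)}.
\]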

Keywords: feedforward neural network; neural network self-configuring learning (NNSCL) algorithm; generalized inverse matrix (GIM) algorithm
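The page gives no implementation details of the NNSCL-based pruning procedure, so the following is only a minimal sketch of the general idea in Python/NumPy, assuming a single-hidden-layer network: candidate hidden neurons are dropped one at a time, the output weights are re-solved via the generalized inverse (np.linalg.pinv), and a neuron is treated as redundant when its removal barely raises the least-squares residual. The function names, tolerance, and greedy loop are illustrative, not the authors' exact procedure.

import numpy as np

def fit_output_weights(H, T):
    # GIM step: solve the least-squares problem H @ W ≈ T with the
    # Moore-Penrose generalized inverse (pseudoinverse) of H.
    return np.linalg.pinv(H) @ T

def prune_hidden_neurons(H, T, tol=1e-4):
    # Greedy pruning sketch (assumed, not the paper's exact procedure):
    # drop hidden neurons whose removal raises the residual by at most
    # `tol`; output weights are re-solved after each removal, so the
    # input-to-hidden weights never need retraining.
    keep = list(range(H.shape[1]))
    W = fit_output_weights(H, T)
    base_err = np.linalg.norm(H @ W - T)
    pruned = True
    while pruned and len(keep) > 1:
        pruned = False
        for j in keep:
            trial = [k for k in keep if k != j]
            W_trial = fit_output_weights(H[:, trial], T)
            err = np.linalg.norm(H[:, trial] @ W_trial - T)
            if err - base_err <= tol:        # neuron j is "redundant"
                keep, W, base_err = trial, W_trial, err
                pruned = True
                break
    return keep, W

# Toy usage: prune a hypothetical trained 10-neuron tanh hidden layer.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
T = np.sin(3.0 * X)                  # regression targets
V = rng.normal(size=(1, 10))         # stand-in for trained input weights
H = np.tanh(X @ V)                   # hidden-layer activations
keep, W = prune_hidden_neurons(H, T)
print(f"kept {len(keep)} of 10 hidden neurons")

One reason the pseudoinverse is a natural choice here: np.linalg.pinv remains well defined when H is rank-deficient, which is exactly the situation once redundant (nearly collinear) hidden neurons are present.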
