A new optimization algorithm based on single hidden layer feedforward neural networks
Cite this article: Li Na, Liu Bing, Wang Wei. A new optimization algorithm based on single hidden layer feedforward neural networks[J]. Science Technology and Engineering, 2019, 19(1).
Authors: Li Na  Liu Bing  Wang Wei
Affiliations: School of Computer and Information Engineering, Anyang Normal University, Anyang 455000; School of Computer Science and Information Engineering, Anyang Institute of Technology, Anyang 455000; School of Electrical Engineering and Automation, Harbin Institute of Technology, Harbin 150001
Funding: National Natural Science Foundation of China (Grant No. 61034926); Key Scientific Research Project of Higher Education Institutions of Henan Province (Grant No. 15A520037)
Abstract: Feedforward neural networks (FNNs) are the most commonly used function-approximation technique in neural networks. By the universal approximation theorem, a single-hidden-layer feedforward neural network (SFNN) can approximate the corresponding desired outputs arbitrarily closely. Some researchers have used genetic algorithms (GAs) to search for the global optimum of the FNN structure; however, training an SFNN with GAs is quite time-consuming. This paper proposes a new optimization algorithm for SFNNs, based on a convex combination algorithm (CCA) for processing the information in the hidden layer. In effect, the technique is a continuous analogue of GAs that combines the classic mutation and crossover strategies. The proposed method outperforms GAs, which require a great deal of preprocessing work, such as breaking the data down into binary codes, before learning or mutation can be applied. A new error function is also set up to quantify SFNN performance and to obtain the optimal choice of the connection weights, so that the nonlinear optimization problem can be solved directly. Several computational experiments illustrate the improved algorithm; the results show that it is better suited to finding the optimal weights of an SFNN.

Keywords: Feedforward neural networks  Training of neural networks  Evolutionary algorithms  Genetic algorithms
Received: 2018-07-23
Revised: 2018-10-29

A new optimization algorithm based on single hidden layer feedforward neural networks
Li Na, Liu Bing, Wang Wei. A new optimization algorithm based on single hidden layer feedforward neural networks[J]. Science Technology and Engineering, 2019, 19(1).
Authors: Li Na  Liu Bing  Wang Wei
Institution: School of Computer and Information Engineering, Anyang Normal University, Anyang 455000, China; School of Computer Science and Information Engineering, Anyang Institute of Technology, Anyang 455000, China; School of Electrical Engineering and Automation, Harbin Institute of Technology, Harbin 150001, China
Abstract: Feedforward neural networks are the most commonly used function-approximation technique in neural networks. By the universal approximation theorem, it is clear that a single-hidden-layer feedforward neural network (FNN) is sufficient to approximate the corresponding desired outputs arbitrarily closely. Some researchers use genetic algorithms (GAs) to explore the global optimal solution of the FNN structure. However, it is rather time-consuming to use GAs for the training of an FNN. In this paper, we propose a new optimization algorithm for single-hidden-layer FNNs. The method is based on the convex combination algorithm for massaging information in the hidden layer. In fact, this technique explores a continuum idea that combines the classic mutation and crossover strategies of GAs. The proposed method has an advantage over GAs, which require a great deal of preprocessing work to break the data down into a sequence of binary codes before learning or mutation can be applied. We also set up a new error function to measure the performance of the FNN and obtain the optimal choice of the connection weights, so that the nonlinear optimization problem can be solved directly. Several computational experiments illustrate the proposed algorithm, which has good exploration and exploitation capabilities in the search for the optimal weights of single-hidden-layer FNNs.
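The abstract does not spell out the convex combination algorithm itself, but its core idea, replacing the discrete, binary-coded crossover and mutation of a GA with continuous, real-valued operations on the weight vector, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`slfn`, `train_convex`), population size, iteration count, perturbation scale, and the mean-squared-error objective are all assumptions made for the sketch. A candidate weight vector is formed as a convex combination of two parents (the continuous analogue of crossover), with a small Gaussian perturbation playing the role of mutation, so no encoding of the data into binary strings is needed.

```python
import numpy as np

rng = np.random.default_rng(0)

def slfn(x, W, b, v):
    """Single-hidden-layer feedforward net: sigmoid hidden layer, linear output."""
    h = 1.0 / (1.0 + np.exp(-(x @ W + b)))
    return h @ v

def unpack(p, n_in, n_hid):
    """Split a flat parameter vector into hidden weights W, biases b, output weights v."""
    W = p[:n_in * n_hid].reshape(n_in, n_hid)
    b = p[n_in * n_hid:n_in * n_hid + n_hid]
    v = p[n_in * n_hid + n_hid:]
    return W, b, v

def train_convex(x, y, n_hid=8, pop=30, iters=300):
    """Search for SFNN weights via convex combinations of population members."""
    n_in = x.shape[1]
    dim = n_in * n_hid + 2 * n_hid
    P = rng.normal(0.0, 1.0, (pop, dim))          # real-valued population of weight vectors
    err = lambda p: np.mean((slfn(x, *unpack(p, n_in, n_hid)) - y) ** 2)
    E = np.array([err(p) for p in P])
    for _ in range(iters):
        i, j = rng.choice(pop, 2, replace=False)
        lam = rng.uniform()                        # convex-combination "crossover"
        child = lam * P[i] + (1.0 - lam) * P[j]
        child += rng.normal(0.0, 0.05, dim)        # small perturbation as "mutation"
        e = err(child)
        worst = E.argmax()
        if e < E[worst]:                           # child replaces the worst member
            P[worst], E[worst] = child, e
    best = E.argmin()
    return unpack(P[best], n_in, n_hid), E[best]
```

Because every operation acts directly on real-valued weight vectors, the nonlinear optimization over the connection weights is attacked directly, which is the advantage over binary-coded GAs that the abstract emphasizes.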
Keywords: Feedforward neural networks  Training of neural networks  Evolutionary algorithms  Genetic algorithms
This article is indexed in CNKI, Wanfang Data, and other databases.