
AN INFORMATION THEORETICAL APPROACH TO NEURAL NETWORKS
Author: Caro Lucas
Affiliation: Electrical Engineering Department, Tehran University
Funding: This work was supported in part by Tehran University grant number 708.
Abstract: The purpose of this paper is to present a unified theory of several different neural networks that have been proposed for solving various computation, pattern recognition, imaging, optimization, and other problems. The functioning of these networks is characterized by Lyapunov energy functions. The relationship between deterministic and stochastic neural networks is examined. Simulated annealing methods for finding the global optimum of an objective function, as well as their generalization by injecting noise into deterministic neural networks, are discussed. A statistical interpretation of the dynamic evolution of the different neural networks is presented. The problem of training different neural networks is investigated in this general framework. It is shown how this approach can be used not only for analyzing various neural networks, but also for choosing the proper neural network for a given problem and designing a training algorithm for that network.
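The abstract mentions simulated annealing as a method for finding the global optimum of an objective (energy) function, generalized by injecting noise into otherwise deterministic dynamics. The sketch below is not from the paper; it is a generic, minimal simulated-annealing loop on an illustrative one-dimensional energy with two minima, where accepting uphill moves with probability exp(-delta/T) plays the role of the injected noise, and the geometric cooling schedule recovers deterministic (greedy) descent as T approaches zero. All function names and the example energy are assumptions for illustration.

```python
import math
import random

def simulated_annealing(energy, x0, neighbor, t0=2.0, cooling=0.995, steps=2000, seed=0):
    """Minimize an energy function by simulated annealing.

    A candidate move that raises the energy by delta is still accepted
    with probability exp(-delta / T), letting the search escape local
    minima; T is lowered geometrically so the dynamics become
    effectively deterministic as T -> 0.
    """
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(steps):
        cand = neighbor(x, rng)
        e_cand = energy(cand)
        delta = e_cand - e
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x, e = cand, e_cand           # accept (possibly uphill) move
            if e < best_e:
                best_x, best_e = x, e     # track best state visited
        t *= cooling                      # cool the temperature
    return best_x, best_e

# Illustrative energy with two minima near x = -1 and x = +1;
# the 0.3*x tilt makes the one near x = -1 the global minimum.
energy = lambda x: (x * x - 1) ** 2 + 0.3 * x
step = lambda x, rng: x + rng.gauss(0.0, 0.5)   # Gaussian proposal

x_opt, e_opt = simulated_annealing(energy, x0=1.0, neighbor=step)
```

Starting from the shallower basin at x = 1, the thermal noise lets the chain cross the barrier at x = 0 early on, while the cooling schedule freezes it into a nearby minimum at the end.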


Caro Lucas. AN INFORMATION THEORETICAL APPROACH TO NEURAL NETWORKS[J]. Journal of Systems Science and Complexity, 1993(4).
Institution: Electrical Engineering Department, Tehran University, P.O. Box 14155/6181, Tehran, Iran
Keywords: Neural networks; stochastic systems; information; energy entropy
This article is indexed in CNKI and other databases.
