Convergence analysis of stochastic gradient algorithms
Citation: DING Feng, YANG Jiaben. Convergence analysis of stochastic gradient algorithms [J]. Journal of Tsinghua University (Science and Technology), 1999(1).
Authors: DING Feng, YANG Jiaben
Affiliation: Department of Automation, Tsinghua University, Beijing 100084, China
Funding: Supported by a Chinese Academy of Sciences project and a National Key Technologies R&D project
Abstract: Although the stochastic gradient (SG) algorithm requires far less computation than the least squares algorithm, its convergence is very slow. To improve the convergence rate and parameter estimation accuracy of the SG algorithm, a forgetting gradient algorithm is proposed; it not only converges faster but can also track time-varying parameters. Proving the convergence of SG algorithms is a difficult problem in the field of identification. This paper analyzes their convergence using the martingale convergence theorem. The results show that the parameter estimation error of the SG algorithm is uniformly bounded, and that under a strong persistent excitation condition the estimation error converges to zero. Numerical simulations show that the proposed method is effective.

Keywords: automatic control; parameter estimation; identification; martingale convergence theorem

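The forgetting gradient recursion summarized in the abstract can be sketched as follows. This is a minimal illustration in the standard normalized-gradient form, not necessarily the paper's exact formulation; the function name, the initial values theta(0) = 0 and r(0) = 1, and the default forgetting factor are assumptions for the sketch.

```python
import numpy as np

def forgetting_gradient(phi, y, lam=0.95):
    """Forgetting-factor stochastic gradient ("forgetting gradient") estimator.

    phi : (T, n) array of regressor vectors phi(t)
    y   : (T,)  array of scalar outputs y(t)
    lam : forgetting factor in (0, 1]; lam = 1 recovers the plain SG algorithm.

    Assumed standard recursion:
        r(t)     = lam * r(t-1) + ||phi(t)||^2
        theta(t) = theta(t-1) + phi(t) / r(t) * (y(t) - phi(t)' theta(t-1))
    """
    T, n = phi.shape
    theta = np.zeros(n)   # initial estimate theta(0) = 0 (assumed)
    r = 1.0               # step-size normalizer, r(0) = 1 (assumed)
    for t in range(T):
        r = lam * r + phi[t] @ phi[t]                       # forgetting update of the normalizer
        theta = theta + (phi[t] / r) * (y[t] - phi[t] @ theta)  # gradient correction
    return theta
```

With lam < 1 the normalizer r(t) stays bounded, so the step size does not vanish and the estimator can track time-varying parameters; with lam = 1, r(t) grows without bound and the step size shrinks toward zero, which is precisely why the plain SG algorithm converges slowly.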
This article is indexed in CNKI, Wanfang Data, and other databases.