Journal of Northeastern University (Natural Science) ›› 2015, Vol. 36 ›› Issue (3): 318-322. DOI: 10.12068/j.issn.1005-3026.2015.03.004

• Information & Control •

Policy Iteration Algorithm for Nonzero-Sum Games with Unknown Models

YANG Ming1, LUO Yan-hong1, WANG Yi-he2

  1. School of Information Science & Engineering, Northeastern University, Shenyang 110819, China; 2. Economic and Technical Research Institute, State Grid Liaoning Electric Power Co., Ltd., Shenyang 110000, China
  • Received: 2014-01-08 Revised: 2014-01-08 Online: 2015-03-15 Published: 2014-11-07
  • Contact: YANG Ming
  • About author: YANG Ming (1987-), male, born in Linyi, Shandong; Ph.D. candidate at Northeastern University.
  • Supported by:
    National Natural Science Foundation of China (61104010); Specialized Research Fund for the Doctoral Program of Higher Education of China (20110042120032).

Abstract: An online integral policy iteration algorithm was proposed to solve two-player nonzero-sum differential games whose nonlinear continuous-time dynamics are completely unknown. By injecting exploration signals into the control and disturbance policies, the algorithm avoids any need for model information, yielding a model-free approximate dynamic programming (ADP) method for nonzero-sum games. The algorithm updates the two value functions, the control policy, and the disturbance policy simultaneously, and the policy weight parameters converge. In the implementation, four neural networks are used to approximate the two game value functions, the control policy, and the disturbance policy, and the least squares method is used to estimate the unknown neural-network parameters. A simulation example demonstrates the effectiveness of the developed scheme.

Key words: adaptive dynamic programming, nonzero-sum games, policy iteration, neural networks, optimal control
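
The core loop described in the abstract — evaluate the current policies by least squares on sampled data, then improve the policies, without ever reading the system model — can be sketched in a much-simplified form. The sketch below is not the paper's continuous-time two-player formulation: it uses a hypothetical scalar discrete-time system, a single player, and a quadratic Q-function basis in place of the four neural networks; the system parameters `a`, `b` and all numeric values are illustrative assumptions.

```python
import numpy as np

# Hypothetical scalar system, used only to generate data; the learner
# never reads a or b directly (model-free, as in the paper's setting).
a, b = 0.9, 0.5
gamma = 0.99  # discount factor


def step(x, u):
    return a * x + b * u


def cost(x, u):
    return x * x + u * u


k = 0.0  # initial stabilizing feedback u = -k * x
rng = np.random.default_rng(0)

for _ in range(20):
    # Policy evaluation: fit Q(x, u) = w . [x^2, x*u, u^2] by least squares
    # on the Bellman identity Q(x, u) - gamma * Q(x', u') = cost(x, u).
    Phi, y = [], []
    for _ in range(200):
        x = rng.uniform(-2.0, 2.0)
        u = -k * x + 0.1 * rng.standard_normal()  # exploration signal
        xn = step(x, u)
        un = -k * xn                              # on-policy next action
        phi = np.array([x * x, x * u, u * u]) - gamma * np.array(
            [xn * xn, xn * un, un * un]
        )
        Phi.append(phi)
        y.append(cost(x, u))
    w, *_ = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)

    # Policy improvement: minimize Q over u, giving u = -(w1 / (2*w2)) * x.
    k = w[1] / (2 * w[2])
```

Because the dynamics are deterministic, the Bellman identity holds exactly even for explored actions, so each least-squares solve recovers the policy's Q-weights and the iteration converges to the discounted optimal gain; the exploration noise only serves to make the regression well-conditioned, mirroring the role of the exploration signals in the paper.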
