Application of Local Reinforcement Learning Based on Fuzzy Neural Network to Robocup
Cite this article: WU Ding-hui, LI Zhen, JI Zhi-cheng. Application of Local Reinforcement Learning Based on Fuzzy Neural Network to Robocup[J]. Journal of System Simulation, 2007, 19(16): 3719-3723.
Authors: WU Ding-hui, LI Zhen, JI Zhi-cheng
Institution: Institute of Electrical Automation, Jiangnan University, Wuxi 214122, Jiangsu, China
Funding: High Technology Research and Development Program of Jiangsu Province
Abstract: To address agent cooperation and action selection in the Robocup simulation league, local Q-learning based on a fuzzy neural network is proposed, combining a fuzzy neural network (FNN) and dynamic role assignment on a local coordination graph with traditional Q-learning. The method effectively suppresses noise interference in the simulation platform, improves the precision of action selection, removes the excessive memory occupied by the Q-table in traditional Q-learning, strengthens the generalization ability of the system, and further shortens the learning time, so it better satisfies the real-time requirements of competition. Applying it to the passing and shooting models of the simulation league verifies the effectiveness of the method.
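One way to picture the dynamic role assignment on a local coordination graph mentioned above: neighbouring agents are linked by pairwise payoff tables over candidate roles, and the joint assignment that maximizes the total payoff over the graph edges is chosen. The Python sketch below illustrates this general idea only; the agents, roles and payoff values are hypothetical and are not taken from the paper, and a real implementation would derive the payoffs from the current game state.

from itertools import product

ROLES = ["passer", "receiver", "supporter"]          # hypothetical candidate roles
EDGES = {                                            # local coordination graph: (agent_i, agent_j) -> pairwise payoff table
    (0, 1): {("passer", "receiver"): 1.0, ("receiver", "passer"): 0.8},
    (1, 2): {("receiver", "supporter"): 0.6, ("supporter", "receiver"): 0.4},
}

def best_assignment(n_agents):
    # Enumerate joint role assignments and keep the one that maximizes the
    # sum of pairwise payoffs over the graph edges (feasible for small local
    # graphs; variable elimination would be used for larger ones).
    best, best_score = None, float("-inf")
    for roles in product(ROLES, repeat=n_agents):
        score = sum(table.get((roles[i], roles[j]), 0.0)
                    for (i, j), table in EDGES.items())
        if score > best_score:
            best, best_score = roles, score
    return best, best_score

print(best_assignment(3))   # e.g. ('passer', 'receiver', 'supporter')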

Keywords: Robocup simulation league; role assignment; reinforcement learning; fuzzy neural network
Article ID: 1004-731X(2007)16-3719-05
Received: 2007-02-02
Revised: 2007-04-23

Application of Local Reinforcement Learning Based on Fuzzy Neural Network to Robocup
WU Ding-hui, LI Zhen, JI Zhi-cheng. Application of Local Reinforcement Learning Based on Fuzzy Neural Network to Robocup[J]. Journal of System Simulation, 2007, 19(16): 3719-3723.
Authors: WU Ding-hui, LI Zhen, JI Zhi-cheng
Institution: Institute of Electrical Automation, Southern Yangtze University, Wuxi 214122, China
Abstract: For agent collaboration and action selection in the Robocup Simulation Team, a method called Local Q-Learning based on Fuzzy Neural Network was proposed, which combined the fuzzy neural network and dynamic role assignment in a local coordination graph with traditional Q-learning. With this method, noise interference from the simulation platform was well restrained and the precision of action selection was much improved. The problem that the Q-table of traditional Q-learning occupied too much memory was solved. Moreover, the method improved the generalization capability of the system, shortened the learning time, and better met the real-time requirements of the competition. It was applied to the passing and shooting models of the simulation team, and the results show the validity of the method.
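One reading of "local Q-learning based on a fuzzy neural network" is that the tabular Q-function is replaced by a fuzzy rule base whose consequent values are learned with the usual temporal-difference update; this is what yields the memory savings and generalization described in the abstract. The Python sketch below shows that general scheme under assumed Gaussian membership functions; the state variables, rule grid, learning rates and reward are illustrative assumptions, not the authors' actual network design.

import numpy as np

class FuzzyQ:
    def __init__(self, centres, sigma, n_actions, alpha=0.1, gamma=0.9):
        self.centres = np.asarray(centres)             # (n_rules, state_dim) Gaussian rule centres
        self.sigma = sigma                             # shared membership width
        self.q = np.zeros((len(centres), n_actions))   # per-rule action values (replaces the Q-table)
        self.alpha, self.gamma = alpha, gamma

    def _firing(self, state):
        # Normalised firing strength of each fuzzy rule for the given state
        d2 = np.sum((self.centres - state) ** 2, axis=1)
        w = np.exp(-d2 / (2 * self.sigma ** 2))
        return w / (w.sum() + 1e-12)

    def q_values(self, state):
        # Q(s, a) is the firing-strength-weighted sum of the rule values, so
        # nearby states share parameters and generalise without a huge table.
        return self._firing(state) @ self.q

    def update(self, s, a, r, s_next):
        # Standard Q-learning TD error, distributed over the active rules
        w = self._firing(s)
        td = r + self.gamma * np.max(self.q_values(s_next)) - self.q_values(s)[a]
        self.q[:, a] += self.alpha * td * w

# Hypothetical usage: a 2-D state (e.g. distance and angle to the goal)
# and three discrete actions (e.g. pass, dribble, shoot).
if __name__ == "__main__":
    grid = np.array([[x, y] for x in np.linspace(0, 1, 5) for y in np.linspace(0, 1, 5)])
    agent = FuzzyQ(grid, sigma=0.2, n_actions=3)
    s = np.array([0.3, 0.7])
    a = int(np.argmax(agent.q_values(s)))              # greedy action selection
    agent.update(s, a, r=1.0, s_next=np.array([0.35, 0.6]))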
Keywords: Robocup simulation team; role assignment; reinforcement learning; fuzzy neural network