Scheduling Algorithm in Distributed Systems Based on Non-Cooperative Game
Cite this article: TONG Zhao, XIAO Zheng, LI Kenli, LIU Hong, LI Jun. Scheduling Algorithm in Distributed Systems Based on Non-Cooperative Game[J]. Journal of Hunan University (Natural Sciences), 2016, 43(10): 148-154.
Authors: TONG Zhao  XIAO Zheng  LI Kenli  LIU Hong  LI Jun
Affiliations: (1. College of Mathematics and Computer Science, Hunan Normal University, Changsha, Hunan 410012, China; 2. College of Information Science and Engineering, Hunan University, Changsha, Hunan 410082, China)
Abstract: To address the task scheduling problem in distributed systems, and drawing on the characteristics of task scheduling in distributed environments, a multi-role task scheduling framework based on a non-cooperative game is established, and on this basis a distributed reinforcement learning algorithm built on a Nash-equilibrium joint scheduling strategy is proposed. Compared with static scheduling algorithms, the proposed algorithm requires less system knowledge: it enables each scheduler to actively learn prior knowledge about task arrival and execution and to adapt to the allocation strategies of neighboring schedulers, with the goal of driving the schedulers' strategies toward a Nash equilibrium. Simulation results show that the proposed algorithm outperforms comparable scheduling algorithms such as OLB (Opportunistic Load Balancing), MET (Minimum Execution Time) and MCT (Minimum Completion Time) in the expected response time of tasks and in fairness.

Keywords: distributed computing  reinforcement learning  task scheduling  load balancing
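The abstract above describes the learning scheduler only at a high level. As a rough illustration of the idea that each scheduler learns from observed task response times and adapts to the load generated by neighboring schedulers' allocation choices, the following is a minimal sketch in Python, assuming a stateless Q-learning formulation with a softmax policy; the class and attribute names (Scheduler, q, alpha, tau) are hypothetical and are not taken from the paper.

import math
import random

class Scheduler:
    """Hypothetical learning scheduler with one value estimate per node.

    Each scheduler independently updates its estimates from the task
    response times it observes; since those response times depend on
    what every other scheduler is doing, repeated updates push each
    softmax policy toward a mutual best response, which is the
    Nash-equilibrium intuition sketched in the abstract.
    """

    def __init__(self, n_nodes, alpha=0.1, tau=1.0):
        self.q = [0.0] * n_nodes   # estimated value (negative response time) per node
        self.alpha = alpha         # learning rate
        self.tau = tau             # softmax temperature controlling exploration

    def choose_node(self):
        # Boltzmann (softmax) selection over the current value estimates.
        weights = [math.exp(v / self.tau) for v in self.q]
        threshold = random.random() * sum(weights)
        for node, w in enumerate(weights):
            threshold -= w
            if threshold <= 0:
                return node
        return len(self.q) - 1

    def update(self, node, response_time):
        # Reward is the negative observed response time; incremental update.
        reward = -response_time
        self.q[node] += self.alpha * (reward - self.q[node])

In a full simulation, several such schedulers dispatch tasks to shared compute nodes, observe the resulting response times (which depend on everyone's choices), and keep updating; gradually lowering tau makes each policy concentrate on its best response to the others.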

Scheduling Algorithm in Distributed Systems Based on Non-Cooperative Game
Affiliation: (1. College of Mathematics and Computer Science, Hunan Normal University, Changsha, Hunan 410012, China; 2. College of Computer Science and Electronic Engineering, Hunan University, Changsha, Hunan 410082, China)
Abstract: To address the task scheduling problem in distributed systems, and based on an important feature of task scheduling in distributed computing environments, we establish a multi-layer, multi-role non-cooperative game framework and propose a distributed reinforcement learning algorithm based on a Nash-equilibrium joint scheduling strategy. Compared with static scheduling algorithms, the proposed algorithm needs less system information. It enables each scheduler to actively learn prior knowledge about task arrival and execution and to adapt to the allocation policies of adjacent schedulers; the goal is to move the schedulers' strategies toward a Nash equilibrium. Simulation experiments show that the proposed algorithm achieves better performance in the expected response time of tasks and in fairness than classical scheduling algorithms such as OLB, MET and MCT.
Keywords: distributed computing  reinforcement learning  task scheduling  load balancing
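For reference, the three baselines named in the abstract are standard immediate-mode mapping heuristics from the heterogeneous-computing literature. The sketch below uses their usual formulations and hypothetical argument names (ready_time is each machine's earliest idle time, exec_time is the current task's execution time on each machine); whether the paper uses exactly these variants is not stated in the abstract.

def olb(ready_time, exec_time):
    # Opportunistic Load Balancing: earliest-available machine, ignores execution time.
    return min(range(len(ready_time)), key=lambda m: ready_time[m])

def met(ready_time, exec_time):
    # Minimum Execution Time: fastest machine for this task, ignores current load.
    return min(range(len(exec_time)), key=lambda m: exec_time[m])

def mct(ready_time, exec_time):
    # Minimum Completion Time: minimizes ready time plus execution time.
    return min(range(len(exec_time)), key=lambda m: ready_time[m] + exec_time[m])

# Example with 3 machines: per-machine ready times and the task's execution times.
ready = [4.0, 1.0, 6.0]
execution = [2.0, 9.0, 1.5]
print(olb(ready, execution), met(ready, execution), mct(ready, execution))  # prints: 1 2 0

MCT is the usual middle ground: unlike OLB it accounts for how long the task actually runs, and unlike MET it accounts for the load already queued on each machine.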