A Shared Last-Level Cache Management Policy for Inclusive Cache
Citation: LOU Mian, XIAO Jian-qing, ZHANG Xun-ying, WU Long-sheng, GUAN Gang-qiang. A Shared Last-Level Cache Management Policy for Inclusive Cache[J]. Journal of Beijing Institute of Technology (Natural Science Edition), 2016, 36(1): 75-80.
Authors: LOU Mian  XIAO Jian-qing  ZHANG Xun-ying  WU Long-sheng  GUAN Gang-qiang
Institution: 1. Xi'an Micro-Electronics Technique Institute, Xi'an, Shaanxi 710075, China; 2. School of Electronic Science and Engineering, National University of Defense Technology, Changsha, Hunan 410073, China
Funding: National 863 Program of China (2011AA120204); Aerospace Innovation Program (YY2011-012)
Abstract: To address the problem that the traditional LRU replacement policy cannot perceive the temporal locality of an inclusive cache, a management policy for the shared last-level cache (SLLC) of inclusive caches is presented. Useless blocks are stored ahead of time in a small, low-cost bypass buffer, which keeps them from competing with frequently reused data for SLLC capacity while still maintaining the inclusion property. To further single out the least reused blocks as replacement victims, a locality detection circuit is constructed that helps evict such blocks from the SLLC as early as possible. Benefiting from the mutual calibration of the two predictors, a unified management algorithm is proposed that both bypasses useless blocks and replaces rarely reused blocks. Experimental results show that the proposed policy reduces the SLLC miss rate by 21.67% on average and raises the prediction accuracy to 72%, while requiring a hardware overhead of less than 1% of the SLLC.

Keywords: inclusive cache  management policy  shared last-level cache  multiprocessors
Received: 9/8/2014
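The abstract outlines three cooperating mechanisms: a small bypass buffer that keeps predicted-useless fills out of the SLLC without breaking inclusion, a locality detection circuit that marks low-reuse lines as preferred replacement victims, and a unified algorithm in which the two predictors calibrate each other. The record contains no pseudocode, so the C++ sketch below is only a rough illustration of how such a policy could fit together; the class name SLLC, the set/way/buffer sizes, the address-hashed 2-bit counters, and the training rules are all assumptions for illustration, not the paper's actual design, and the inclusion bookkeeping (directory state and back-invalidation of inner caches) is omitted.

```cpp
// Hypothetical sketch of the policy described in the abstract: an SLLC whose
// fills can be diverted into a small bypass buffer, plus a simple reuse
// ("locality") predictor that picks low-reuse victims before falling back to
// LRU. All sizes, the address-hashed 2-bit counters, and the training rules
// are invented for illustration; they are not taken from the paper.
#include <cstdint>
#include <deque>
#include <iostream>
#include <vector>

struct Line {
    uint64_t tag = 0;
    bool     valid = false;
    uint64_t lru = 0;          // larger value = more recently used
    bool     low_reuse = true; // cleared by the "locality detector" on reuse
};

class SLLC {
public:
    SLLC(size_t sets, size_t ways, size_t bypass_entries)
        : sets_(sets), ways_(ways), bypass_cap_(bypass_entries),
          array_(sets, std::vector<Line>(ways)) {}

    // Returns true on a hit in either the SLLC array or the bypass buffer.
    bool access(uint64_t block) {
        ++clock_;
        size_t set = block % sets_;
        for (Line& l : array_[set]) {
            if (l.valid && l.tag == block) {   // SLLC hit: block has proven reuse
                l.lru = clock_;
                l.low_reuse = false;
                train(block, /*reused=*/true);
                return true;
            }
        }
        // Hit in the bypass buffer means the bypass prediction was wrong:
        // promote the block into the SLLC and retrain the predictor.
        for (auto it = bypass_.begin(); it != bypass_.end(); ++it) {
            if (*it == block) {
                bypass_.erase(it);
                train(block, /*reused=*/true);
                insert(set, block);
                return true;
            }
        }
        // Miss: predicted-useless blocks are kept out of the SLLC entirely.
        if (predict_useless(block)) {
            bypass_.push_back(block);
            if (bypass_.size() > bypass_cap_) bypass_.pop_front();
        } else {
            insert(set, block);
        }
        train(block, /*reused=*/false);
        return false;
    }

private:
    void insert(size_t set, uint64_t block) {
        Line* victim = &array_[set][0];
        for (Line& l : array_[set]) {
            if (!l.valid) { victim = &l; break; }
            // Prefer victims the locality detector still marks as low reuse;
            // among equals, evict the least recently used line.
            if (l.low_reuse && !victim->low_reuse) victim = &l;
            else if (l.low_reuse == victim->low_reuse && l.lru < victim->lru)
                victim = &l;
        }
        *victim = Line{block, true, clock_, true};
    }

    // Bypass predictor: one 2-bit saturating counter per address hash.
    bool predict_useless(uint64_t block) { return table_[block % kTableSize] >= 2; }
    void train(uint64_t block, bool reused) {
        uint8_t& c = table_[block % kTableSize];
        if (reused)     { if (c > 0) --c; }  // observed reuse -> less likely useless
        else if (c < 3) { ++c; }             // filled without reuse -> more likely useless
    }

    static constexpr size_t kTableSize = 1024;
    size_t sets_, ways_, bypass_cap_;
    uint64_t clock_ = 0;
    std::vector<std::vector<Line>> array_;
    std::deque<uint64_t> bypass_;
    uint8_t table_[kTableSize] = {};
};

int main() {
    SLLC llc(/*sets=*/64, /*ways=*/8, /*bypass_entries=*/16);
    size_t hits = 0, total = 0;
    for (int i = 0; i < 10000; ++i) {
        hits += llc.access(0x100000 + i); ++total;  // streaming blocks, never reused
        hits += llc.access(42);           ++total;  // hot block that should stay resident
    }
    std::cout << "hit rate: " << static_cast<double>(hits) / total << "\n";
}
```

In this sketch, the mutual calibration described in the abstract is reduced to a single feedback path: a hit in the bypass buffer signals a wrong bypass decision, so the block is promoted into the SLLC and the predictor is trained toward "reusable", while lines that age in the SLLC without being touched remain marked low-reuse and are chosen as victims before LRU order is consulted.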
