Image super-resolution based on codec and knowledge distillation
Citation: ZHOU Zhaojing, WANG Xiaoru, JIANG Zhuqing, MEN Aidong, MA Long. Image super-resolution based on codec and knowledge distillation[J]. Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition), 2022, 34(6): 987-994.
Authors: ZHOU Zhaojing  WANG Xiaoru  JIANG Zhuqing  MEN Aidong  MA Long
Institution: School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing 100876, P. R. China; Department of Information Technology, Beijing Economic Management School, Beijing 100089, P. R. China; No. 96962 Unit of the People's Liberation Army of China, Beijing 102206, P. R. China
Abstract: Super-resolution methods based on convolutional neural networks demand heavy computation and storage, which motivates the study of knowledge distillation for super-resolution tasks. Because traditional knowledge distillation methods cannot transfer useful information between super-resolution networks, this paper presents a knowledge distillation training framework suited to super-resolution. Drawing on the structure of an encoder-decoder, the framework uses an encoder to capture the high-frequency information in the high-resolution image and builds the decoder from a large network with strong performance. Through feature-level knowledge distillation, the high-frequency information is then transferred from the large network to a weaker small network, so that the small network's high-resolution reconstruction improves without any modification to its architecture, yielding a lightweight super-resolution network. Quantitative comparison with existing algorithms shows that the proposed method achieves better performance under the same computing resources.
Keywords: super-resolution  knowledge distillation  model compression
Received: 12 June 2021  Revised: 30 August 2022
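The abstract describes the framework only at a high level, so the PyTorch sketch below illustrates one plausible wiring of the three pieces it names: an encoder that extracts high-frequency content from the HR image, a large teacher network acting as the decoder, and a feature-distillation loss that pulls the small student's features toward the teacher's. Everything concrete here (module shapes, the blur-based high-pass filter, the `features`/`features_and_output` hooks, the L1 losses, the weight `alpha`) is an assumption for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' released code): an encoder compresses
# the high-frequency residual of the HR image, the large teacher network acts
# as the decoder, and the small student is trained with a feature-level
# distillation loss against the teacher. All names and shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HFEncoder(nn.Module):
    """Encodes the high-frequency residual of an HR image.

    Two stride-2 convolutions shrink the HR image by 4x, so for x4
    super-resolution the code spatially matches the LR input (assumption).
    """
    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1),
        )

    def forward(self, hr: torch.Tensor) -> torch.Tensor:
        # High-frequency residual = image minus its low-pass (blurred) version.
        low = F.avg_pool2d(hr, kernel_size=5, stride=1, padding=2)
        return self.net(hr - low)

def distillation_step(student, teacher, encoder, lr_img, hr_img, alpha=1.0):
    """One training step for the student.

    Assumes the encoder and teacher were trained beforehand and are frozen,
    that teacher.features(lr, hf_code) returns HF-informed intermediate
    features, and that student.features_and_output(lr) returns features of
    the same shape plus the super-resolved image (all hypothetical hooks).
    """
    with torch.no_grad():
        hf_code = encoder(hr_img)                   # HF info from the HR image
        t_feat = teacher.features(lr_img, hf_code)  # teacher acts as "decoder"
    s_feat, sr = student.features_and_output(lr_img)
    # Reconstruction loss plus feature distillation; the student's own
    # architecture is untouched, matching the paper's stated goal.
    loss = F.l1_loss(sr, hr_img) + alpha * F.l1_loss(s_feat, t_feat)
    loss.backward()
    return loss
```

Under this reading, only the trained student runs at test time; the encoder and teacher exist purely to shape the student's features during training, which is consistent with the abstract's claim that reconstruction improves with no change to the small network's structure or runtime cost.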
