Video fusion based on guided filter and weighted two-dimensional principal component analysis
Cite this article: XU Dan, GONG Peiqi, GUO Songtao, WANG Ying, YAO Jing. Video fusion based on guided filter and weighted two-dimensional PCA[J]. Journal of Chongqing University (Natural Science Edition), 2019, 42(5): 95-107.
Authors: XU Dan  GONG Peiqi  GUO Songtao  WANG Ying  YAO Jing
Institution: College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China; Luohe First Senior High School, Luohe 462000, Henan, China; College of Computer Science, Chongqing University, Chongqing 400044, China; Faculty of Science, University of Hong Kong, Hong Kong 999077, China
Funding: National Natural Science Foundation of China (61772432, 61772433).
Abstract: Visible-light video provides texture information, while infrared video reveals hidden thermal information; fusing the two kinds of video can improve the viewing experience of mobile users. However, because mobile devices have limited resources, complex video processing tasks are offloaded to a cloudlet with relatively abundant computing, storage, and battery resources. This paper proposes an inter-frame redundancy detection algorithm based on mean hashing, so that only the frames remaining after redundancy removal are transmitted to the cloudlet for processing, together with a video fusion algorithm based on the guided filter and weighted two-dimensional principal component analysis (W2DPCA). The fusion algorithm first uses a guided filter to split each frame to be fused into a base layer and a detail layer; it then fuses the base layers of the visible and infrared frames with an improved adaptive W2DPCA, and finally obtains the fused frame by combining the fused base layer with the detail layers. Experimental results show that the inter-frame redundancy detection method minimizes the amount of redundant data transmitted to the cloudlet and reduces the energy consumption of mobile devices. Compared with existing methods, the fused frames produced by the proposed algorithm share more mutual information and higher structural similarity with the original frames, and also achieve a higher overall standard deviation and peak signal-to-noise ratio, giving a better overall fusion result.
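The record itself contains no code. As a rough illustration of the mean-hash (average-hash) redundancy check the abstract describes, the Python sketch below hashes each grayscale frame and drops frames whose hash is close to that of the last kept frame. HASH_SIZE and THRESHOLD are hypothetical parameters; the paper's actual hash length and decision threshold are not stated in this record.

```python
import numpy as np
import cv2  # opencv-python, used only for resizing

HASH_SIZE = 8   # assumed: 8x8 average hash (64 bits)
THRESHOLD = 5   # assumed: max Hamming distance at which two frames count as redundant

def mean_hash(frame: np.ndarray) -> np.ndarray:
    """Average hash: shrink the frame, then threshold each pixel at the mean."""
    small = cv2.resize(frame, (HASH_SIZE, HASH_SIZE), interpolation=cv2.INTER_AREA)
    return (small > small.mean()).flatten()

def is_redundant(prev: np.ndarray, curr: np.ndarray) -> bool:
    """Two frames are redundant if their hashes differ in only a few bits."""
    return np.count_nonzero(mean_hash(prev) != mean_hash(curr)) <= THRESHOLD

def remove_redundancy(frames):
    """Keep the first frame, then every frame that differs enough from the last kept one."""
    kept = [frames[0]]
    for f in frames[1:]:
        if not is_redundant(kept[-1], f):
            kept.append(f)
    return kept
```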

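The two-layer fusion pipeline can be sketched in the same spirit. The guided filter below follows the standard He et al. box-filter formulation, and the base-layer weights come from a plain PCA of the two base layers, which is only a stand-in for the paper's improved adaptive W2DPCA (whose weighting scheme is not specified in this record). Grayscale frames with values in [0, 255] are assumed.

```python
import numpy as np
import cv2

def guided_filter(I, p, radius=8, eps=0.01):
    """Guided filter (He et al.): edge-preserving smoothing of p, guided by I."""
    ksize = (2 * radius + 1, 2 * radius + 1)
    box = lambda x: cv2.blur(x, ksize)          # mean filter over the window
    mean_I, mean_p = box(I), box(p)
    cov_Ip = box(I * p) - mean_I * mean_p
    var_I = box(I * I) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box(a) * I + box(b)

def decompose(frame):
    """Split a frame into a smooth base layer and a detail layer (self-guided filtering)."""
    base = guided_filter(frame, frame)
    return base, frame - base

def pca_weights(b1, b2):
    """Fusion weights from the principal eigenvector of the 2x2 covariance of the
    two base layers (plain PCA, standing in for the paper's adaptive W2DPCA)."""
    C = np.cov(np.vstack([b1.flatten(), b2.flatten()]))
    w = np.abs(np.linalg.eigh(C)[1][:, -1])     # eigenvector of the largest eigenvalue
    return w / w.sum()

def fuse(visible, infrared):
    """Two-layer fusion: PCA-weighted base layers plus averaged detail layers."""
    vb, vd = decompose(visible.astype(np.float64) / 255.0)
    ib, idt = decompose(infrared.astype(np.float64) / 255.0)
    w1, w2 = pca_weights(vb, ib)
    fused = (w1 * vb + w2 * ib) + 0.5 * (vd + idt)
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)
```

The equal 0.5 weighting of the detail layers is also an assumption: the abstract says only that the fused base layer is combined with the reserved detail layers, without giving the combination rule.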
Keywords: redundancy detection  video fusion  visible video  infrared video  cloudlet
Received: 2019-01-20

This article is indexed in CNKI, Wanfang Data, and other databases.