Person Re-identification Algorithm Based on Channel Feature Aggregation
Citation: XU Zengmin, LU Guangjian, CHEN Junyan, CHEN Jinlong, DING Yong. Person Re-identification Algorithm Based on Channel Feature Aggregation[J]. Journal of Applied Sciences, 2023, 41(1): 107-120.
Authors: XU Zengmin  LU Guangjian  CHEN Junyan  CHEN Jinlong  DING Yong
Affiliations: 1. School of Mathematics and Computing Science, Guilin University of Electronic Technology, Guilin 541004, Guangxi, China; 2. School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004, Guangxi, China; 3. Guilin Anview Technology Co., Ltd. (Anview.ai), Guilin 541010, Guangxi, China
Funding: Supported by the National Natural Science Foundation of China (No. 61862015), the Guangxi Science and Technology Base and Talent Special Project (No. 2021AC06001), and the Guangxi Key Research and Development Program (No. AB17195025)
Abstract: In deep-learning-based person re-identification algorithms, channel features are easily overlooked, which weakens the model's representational ability. To address this, ResNeSt50 is adopted as the backbone network and, drawing on the channel-attention design of SENet, an SE block is attached to the end of each residual block to strengthen the network's extraction of channel features. Because the ReLU function lacks a control factor, it limits the accurate response of different channel feature maps to activation values; a dynamic learning factor is therefore introduced to enrich the channel feature weight information, forming a new weighted activation function, Weighted ReLU (WReLU). A further activation function, Leaky Weighted ReLU (LWReLU), is designed around the local feature maps of grouped convolutions and effectively improves the expressiveness of deep features at different positions. LWReLU is applied in the Split-Attention and SE blocks, improving Split-Attention's ability to learn weights for each group of feature maps. Finally, the loss function is improved with circle loss to optimize convergence of the objective and thereby raise model accuracy. Experimental results show that, on the CUHK03-NP, Market1501, and DukeMTMC-ReID datasets, the proposed method improves Rank-1 over the original backbone by 19.08%, 0.98%, and 2.02%, and improves mAP by 17.13%, 2.11%, and 2.56%, respectively.
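The abstract does not give the WReLU and LWReLU activations in closed form. A minimal NumPy sketch, under the assumption that WReLU adds a learnable dynamic-factor term to ReLU, i.e. WReLU(x) = max(0, x) + a·x, and that LWReLU is its leaky counterpart (the functional forms, the factor `a`, and the `slope` default are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def wrelu(x, a):
    # Assumed form: ReLU plus a weighted term a * x, where `a` stands in
    # for the per-channel dynamic learning factor described in the abstract.
    return np.maximum(0.0, x) + a * x

def lwrelu(x, a, slope=0.01):
    # Assumed leaky counterpart: a small slope for negative inputs,
    # plus the same weighted term a * x.
    return np.where(x > 0, x, slope * x) + a * x
```

In a real network, `a` would be a trainable per-channel parameter updated by backpropagation rather than a fixed scalar.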

Keywords: group convolution  channel attention  rectified linear unit  activation function  dynamic learning factor
Received: 2022-06-23

Person Re-identification Algorithm Based on Channel Feature Aggregation
XU Zengmin, LU Guangjian, CHEN Junyan, CHEN Jinlong, DING Yong. Person Re-identification Algorithm Based on Channel Feature Aggregation[J]. Journal of Applied Sciences, 2023, 41(1): 107-120.
Authors:XU Zengmin  LU Guangjian  CHEN Junyan  CHEN Jinlong  DING Yong
Institution: 1. School of Mathematics and Computing Science, Guilin University of Electronic Technology, Guilin 541004, Guangxi, China; 2. School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004, Guangxi, China; 3. Anview.ai, Guilin 541010, Guangxi, China
Abstract: In deep-learning-based person re-identification algorithms, channel characteristics may be neglected, degrading the model's representational ability. To address this problem, we choose ResNeSt50 as the backbone network and add an SE block to the end of each residual block, drawing on the channel-attention characteristics of SENet to enhance the network's extraction of channel features. In addition, for lack of a control factor, the ReLU function may limit the accurate response of different channel feature maps to activation values. We therefore present two new activation functions. The first, Weighted ReLU (WReLU), combines ReLU with a weighted bias term and effectively improves the feature-selection ability of the network; the second, Leaky Weighted ReLU (LWReLU), is applied in the Split-Attention and SE blocks and enables Split-Attention to better learn weights from each group of feature maps. Moreover, the loss function is improved with circle loss to optimize the convergence of the objective function. Experimental results show that the proposed algorithm outperforms the original backbone by 19.08%, 0.98%, and 2.02% in Rank-1, and by 17.13%, 2.11%, and 2.56% in mAP, on the CUHK03-NP, Market1501, and DukeMTMC-ReID datasets, respectively.
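The circle loss used to reshape the loss function is the pairwise-similarity formulation of Sun et al. (CVPR 2020). A minimal NumPy sketch of that standard form, where `sp`/`sn` are cosine similarities to positive/negative samples and the margin `m` and scale `gamma` defaults are illustrative, not values from this paper:

```python
import numpy as np

def _logsumexp(x):
    # Numerically stable log-sum-exp over a 1-D array.
    c = np.max(x)
    return c + np.log(np.sum(np.exp(x - c)))

def circle_loss(sp, sn, m=0.25, gamma=80.0):
    # Self-paced weights push each similarity toward its optimum
    # O_p = 1 + m (positives) and O_n = -m (negatives).
    ap = np.maximum(1.0 + m - sp, 0.0)
    an = np.maximum(sn + m, 0.0)
    # Logits measured against the decision margins delta_p = 1 - m, delta_n = m.
    logit_p = -gamma * ap * (sp - (1.0 - m))
    logit_n = gamma * an * (sn - m)
    # softplus(logsumexp over negatives + logsumexp over positives)
    return np.log1p(np.exp(_logsumexp(logit_n) + _logsumexp(logit_p)))
```

The loss approaches zero when positives score near 1 and negatives near −m, and grows rapidly as the two similarity sets overlap, which is what drives the improved convergence reported above.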
Keywords:group convolution  channel attention  rectified linear unit  activation function  dynamic learning factor  