
MDF-ANet: Vision Fusion Semantic Segmentation for Low-Light Autonomous Driving
Cite this article: CHANG Liang, BAI Jie, HU Huihui, ZHONG Hongliang. MDF-ANet: Vision Fusion Semantic Segmentation for Low-Light Autonomous Driving[J]. Journal of Beijing Institute of Technology (Natural Science Edition), 2022, 42(1): 97-104.
Authors: CHANG Liang, BAI Jie, HU Huihui, ZHONG Hongliang
Affiliation: 1. School of Automotive Studies, Tongji University, Shanghai 200082, China; 2. Innovation Academy for Microsatellites, Chinese Academy of Sciences, Shanghai 201210, China
Abstract: To address the performance degradation of existing image segmentation models under low-light conditions, MDF-ANet, an image segmentation method based on an RGB and depth feature fusion network, is proposed. To learn features from the raw data sufficiently, two feature extraction branches extract RGB and depth features separately. A feature fusion module is designed to fuse the output feature maps of the two branches at each corresponding scale, and the fused maps are fed into the next layer of the RGB branch, so that the depth map, which is unaffected by illumination, assists RGB feature extraction. The feature maps output at each scale are then fed into a multi-scale upsampling fusion module, which exchanges complementary information across different receptive fields, and the result is upsampled to the original input resolution to obtain the segmentation image. A series of experiments on Cityscapes and its converted low-light images achieves a mean intersection over union (mIoU) of 62.44% on the validation set, a 9.1% improvement over the model using only RGB input, thus improving image segmentation performance under low-light conditions.
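The architecture described in the abstract (two feature-extraction branches with per-scale fusion of depth features into the RGB stream, followed by multi-scale upsampling fusion) can be illustrated with a minimal PyTorch-style sketch. All module names, channel widths, and the element-wise fusion used here are illustrative assumptions, not the paper's exact design; in particular, the attention-based fusion implied by the keywords is replaced by a simple addition for brevity.

```python
# Minimal sketch of a two-branch RGB + depth segmentation network with
# per-scale fusion into the RGB branch. Names and hyper-parameters are
# illustrative assumptions; this does not reproduce MDF-ANet exactly.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """3x3 conv + BN + ReLU, stride 2 to halve the resolution at each scale."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class TwoBranchFusionNet(nn.Module):
    def __init__(self, num_classes=19, widths=(32, 64, 128, 256)):
        super().__init__()
        rgb_chans = [3] + list(widths)    # RGB branch input has 3 channels
        dep_chans = [1] + list(widths)    # depth branch input has 1 channel
        self.rgb_stages = nn.ModuleList(
            conv_block(rgb_chans[i], rgb_chans[i + 1]) for i in range(len(widths)))
        self.depth_stages = nn.ModuleList(
            conv_block(dep_chans[i], dep_chans[i + 1]) for i in range(len(widths)))
        # One 1x1 conv per scale to project the fused features for the decoder.
        self.proj = nn.ModuleList(nn.Conv2d(w, 64, 1) for w in widths)
        self.classifier = nn.Conv2d(64 * len(widths), num_classes, 1)

    def forward(self, rgb, depth):
        fused_per_scale = []
        x, d = rgb, depth
        for rgb_stage, depth_stage, proj in zip(
                self.rgb_stages, self.depth_stages, self.proj):
            x = rgb_stage(x)
            d = depth_stage(d)
            x = x + d                        # fuse depth into the RGB branch;
            fused_per_scale.append(proj(x))  # the fused map feeds the next RGB stage
        # Multi-scale "upsampling fusion": resize every scale to the largest one,
        # concatenate, classify, then upsample to the input resolution.
        size = fused_per_scale[0].shape[-2:]
        merged = torch.cat(
            [F.interpolate(f, size=size, mode="bilinear", align_corners=False)
             for f in fused_per_scale], dim=1)
        logits = self.classifier(merged)
        return F.interpolate(logits, size=rgb.shape[-2:],
                             mode="bilinear", align_corners=False)


if __name__ == "__main__":
    net = TwoBranchFusionNet()
    out = net(torch.randn(1, 3, 256, 512), torch.randn(1, 1, 256, 512))
    print(out.shape)  # torch.Size([1, 19, 256, 512])
```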

Keywords: semantic segmentation; convolutional neural network; attention mechanism; feature fusion; low-light
Received: 2021-03-22

MDF-ANet: Vision Fusion Semantic Segmentation for Low-Light Autonomous Driving
CHANG Liang, BAI Jie, HU Huihui, ZHONG Hongliang. MDF-ANet: Vision Fusion Semantic Segmentation for Low-Light Autonomous Driving[J]. Journal of Beijing Institute of Technology (Natural Science Edition), 2022, 42(1): 97-104.
Authors: CHANG Liang, BAI Jie, HU Huihui, ZHONG Hongliang
Affiliation: 1. School of Automotive Studies, Tongji University, Shanghai 200082, China; 2. Innovation Academy for Microsatellites, Chinese Academy of Sciences, Shanghai 201210, China
Abstract: To address the performance degradation of existing segmentation models under low-light conditions, MDF-ANet, a segmentation method based on an RGB and depth feature fusion network, was proposed. First, a two-branch feature extraction network was arranged to extract RGB and depth features separately, so as to perform sufficient feature learning on the original data. Then, a feature fusion module was designed to fuse the output feature maps of the two branches at each corresponding scale. Finally, the fused feature maps of all scales were fed into a multi-scale upsampling fusion module to learn complementary information across different receptive fields, and the result was upsampled to obtain a segmentation image of the same size as the original input. A series of experiments was carried out on Cityscapes and its converted fake-night counterpart. The results show that a mean intersection over union (mIoU) of 62.44% is achieved on the validation set, a 9.1% improvement over the model trained with RGB input only, effectively improving image segmentation performance under low-light conditions.
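The reported metric, mean intersection over union (mIoU), is the average over classes of |prediction ∩ ground truth| / |prediction ∪ ground truth|, usually accumulated through a confusion matrix. The sketch below shows this standard computation; the 19 evaluation classes and the ignore label of 255 follow the common Cityscapes convention and are assumptions here, not details taken from the paper.

```python
# Sketch of the standard mIoU computation from a confusion matrix.
# The ignore label (255) follows the usual Cityscapes convention.
import numpy as np


def confusion_matrix(pred, gt, num_classes, ignore_label=255):
    """Accumulate a num_classes x num_classes confusion matrix from flat label arrays."""
    mask = gt != ignore_label
    idx = num_classes * gt[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)


def mean_iou(conf):
    """Per-class IoU = TP / (TP + FP + FN); mIoU averages over classes present."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    denom = tp + fp + fn
    iou = np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)
    return np.nanmean(iou)


if __name__ == "__main__":
    num_classes = 19  # Cityscapes evaluation classes
    gt = np.random.randint(0, num_classes, size=(4, 256, 512))
    pred = np.random.randint(0, num_classes, size=(4, 256, 512))
    conf = confusion_matrix(pred.ravel(), gt.ravel(), num_classes)
    print(f"mIoU = {100 * mean_iou(conf):.2f}%")
```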
Keywords: semantic segmentation; convolutional neural network; attention mechanism; feature fusion; low-light