Target detection and localization based on improved YOLOv5s and sensor fusion
Citation: ZHENG Yuhong, ZENG Qingxi, JI Xufang, WANG Rongchen, SONG Yuxin. Target detection and localization based on improved YOLOv5s and sensor fusion[J]. Journal of Hebei University of Science and Technology, 2024, 45(2): 122-130.
Authors: ZHENG Yuhong  ZENG Qingxi  JI Xufang  WANG Rongchen  SONG Yuxin
Affiliation: College of Automation Engineering, Nanjing University of Aeronautics and Astronautics
Funding: National Natural Science Foundation of China (51505221); Nanjing University of Aeronautics and Astronautics Research and Practice Innovation Program (xcxjh20220337)
Abstract: In unmanned-vehicle environment perception, the camera cannot provide the position of road targets, while the LiDAR point cloud is too sparse for reliable detection on its own. To address this, a method that fuses the information of the two sensors for target detection and localization is proposed. The YOLOv5s deep-learning algorithm is used for target detection, and the camera-LiDAR extrinsic parameters are obtained by joint calibration so that coordinates can be transformed between the sensors and the LiDAR point cloud can be projected onto the camera image, yielding the position of each detected target. The method was then validated on a real vehicle. Results show that on an unmanned-vehicle autonomous-driving platform equipped with a TX2 embedded computing platform, the algorithm runs at a detection speed of 27.2 Hz and, over a sustained test period, maintains a missed-detection rate of 12.50%, a maximum recognition distance of 35.32 m, and an average localization accuracy of 0.18 m. Fusing LiDAR and camera thus enables road-target detection and localization on an embedded system and provides a reference for building environment-perception systems on embedded platforms.

Keywords: sensor technology  deep learning  target detection and localization  unmanned vehicle environment perception  camera and LiDAR fusion
Received: 1 August 2023
Revised: 5 December 2023

Target detection and localization based on improved YOLOv5s and sensor fusion
ZHENG Yuhong,ZENG Qingxi,JI Xufang,WANG Rongchen,SONG Yuxin.Target detection and localization based on improved YOLOv5s and sensor fusion[J].Journal of Hebei University of Science and Technology,2024,45(2):122-130.
Authors:ZHENG Yuhong  ZENG Qingxi  JI Xufang  WANG Rongchen  SONG Yuxin
Abstract: In the process of unmanned vehicle environment perception, the camera cannot provide the position information of road targets, and the LiDAR point cloud is sparse, which makes it difficult to achieve good detection results with either sensor alone; a method was therefore proposed which fuses the information of the two sensors for target detection and localization. The YOLOv5s algorithm in deep learning was adopted for target detection, and the extrinsic parameters of the camera and LiDAR were acquired through joint calibration to convert coordinates between the sensors, so that the LiDAR point cloud data could be projected into the camera image data and the position information of each detected target obtained. Real-vehicle experiments were conducted. The results show that the algorithm can achieve a detection speed of 27.2 Hz on the unmanned vehicle autopilot platform equipped with the TX2 embedded computing platform, and maintain a missed-detection rate of 12.50%, a maximum recognition distance of 35.32 m, and an average localization accuracy of 0.18 m over a period of time in the detection environment. The fusion of LiDAR and camera can achieve road target detection and localization on an embedded system, providing a reference for the construction of environment perception systems on embedded platforms.
Keywords:sensor technology  deep learning  target detection and localization  unmanned vehicle environment perception  camera and LiDAR fusion
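The core geometric step described in the abstract — transforming LiDAR points into the camera frame with the jointly calibrated extrinsics, then through the camera intrinsics into pixel coordinates — can be sketched as follows. This is a generic illustration, not the authors' implementation; the extrinsic matrix `T_cam_lidar`, the intrinsic matrix `K`, and the function name are placeholder assumptions.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project N x 3 LiDAR points into pixel coordinates.

    T_cam_lidar : 4 x 4 extrinsic transform (LiDAR frame -> camera frame),
                  as would be obtained from joint calibration.
    K           : 3 x 3 camera intrinsic matrix.
    Returns (u, v) pixel coordinates and the depth of each kept point.
    """
    n = points_lidar.shape[0]
    homogeneous = np.hstack([points_lidar, np.ones((n, 1))])   # N x 4
    cam = (T_cam_lidar @ homogeneous.T)[:3]                    # 3 x N, camera frame
    in_front = cam[2] > 0                                      # drop points behind the camera
    pix = K @ cam[:, in_front]
    uv = (pix[:2] / pix[2]).T                                  # perspective divide -> (u, v)
    return uv, cam[2, in_front]

# Toy check with identity extrinsics and a simple pinhole intrinsic matrix.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
pts = np.array([[0.0, 0.0, 10.0]])            # one point 10 m straight ahead
uv, depth = project_lidar_to_image(pts, np.eye(4), K)
# a point on the optical axis projects to the principal point (320, 240)
```

Once a point cloud is projected this way, the points falling inside a YOLOv5s bounding box can be used to estimate that target's distance, which is how camera detections and LiDAR range measurements are typically associated in this kind of fusion pipeline.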