Funding: National Key R&D Program of China (2021YFB1600205); National Natural Science Foundation of China (52178407, 51978071); Key R&D Program of Shaanxi Province (2022JBGS3-08); Fundamental Research Funds for the Central Universities (300102242901)
Received: 2022-09-02

Pavement crack detection method based on multi-scale feature enhancement
ZHAI Jun-zhi, SUN Zhao-yun, PEI Li-li, HUYAN Ju, LI Wei. Pavement crack detection method based on multi-scale feature enhancement[J]. Journal of Traffic and Transportation Engineering, 2023, 23(1): 291-308. doi: 10.19818/j.cnki.1671-1637.2023.01.022
Authors: ZHAI Jun-zhi, SUN Zhao-yun, PEI Li-li, HUYAN Ju, LI Wei
Affiliation: 1. School of Information Engineering, Chang'an University, Xi'an 710064, Shaanxi, China; 2. School of Transportation, Southeast University, Nanjing 211189, Jiangsu, China
Abstract: To address incomplete pavement crack detection and discontinuous segmentation, a pavement crack detection network based on multi-scale feature enhancement, MFENet, was proposed to realize end-to-end detection, classification, and segmentation of pavement crack images. A multi-scale attention feature enhancement module was designed to establish the mapping between the weight coefficients of the upper-level multi-scale feature channels and those of the lower-level feature channels in the network, thereby highlighting the feature outputs of the effective channels. Based on the physical-location correlation between the coordinate information of pavement cracks and the pixel-level semantic information, a multi-semantic feature correlation module was designed to achieve feature fusion and enhancement across different kinds of semantic information, and the foreground features of pavement crack images were then filtered through feature dimension transformation. In addition, a quantitative evaluation method for deep feature intensity was proposed to improve the interpretability of the model's feature extraction ability. Research results on a self-collected dataset show that the average precision and average recall of MFENet in pavement crack image detection are 4.3% and 5.4% higher than those of Mask R-CNN, respectively, and 14.6% and 14.3% higher than those of the baseline model RDSNet, respectively. In pavement crack image segmentation, the average precision and average recall of MFENet are 6.6% and 8.8% higher than those of Mask R-CNN, respectively, and 8.1% and 9.7% higher than those of RDSNet, respectively. Compared with Mask R-CNN and other mainstream methods, MFENet detects and segments images of different pavement crack types with the highest accuracy.
Research results on public datasets (CFD and CRACK500) show that the detection and segmentation accuracy of MFENet is consistently higher than that of Mask R-CNN and other mainstream methods across datasets covering different scenarios, indicating the stronger robustness of the proposed method. In addition, MFENet also processes images faster than RDSNet on the different datasets.
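The abstract does not give implementation details of the multi-scale attention feature enhancement module. As a rough illustration of the kind of cross-level channel weighting it describes (per-channel weights derived from an upper-level, more semantic feature map reweighting the channels of a lower-level feature map), here is a minimal numpy sketch; all function names, shapes, and the parameter-free sigmoid gate are assumptions for exposition, not the authors' code:

```python
import numpy as np

def channel_weights(feat):
    """Global-average-pool each channel, then squash to (0, 1).

    feat: array of shape (C, H, W). Returns per-channel weights (C,).
    This mimics squeeze-and-excitation-style channel attention in the
    simplest possible form (no learned layers).
    """
    pooled = feat.mean(axis=(1, 2))        # squeeze: (C,)
    return 1.0 / (1.0 + np.exp(-pooled))   # sigmoid gate: (C,)

def cross_level_enhance(upper_feat, lower_feat):
    """Reweight lower-level channels with weights derived from the
    upper-level (coarser, more semantic) feature map.

    Both inputs are assumed to share the channel count C; their spatial
    sizes may differ. Returns lower_feat scaled channel-wise.
    """
    w = channel_weights(upper_feat)        # (C,)
    return lower_feat * w[:, None, None]   # broadcast over H, W

# Toy usage: 4 channels, different spatial resolutions per level.
rng = np.random.default_rng(0)
upper = rng.normal(size=(4, 8, 8))
lower = rng.normal(size=(4, 32, 32))
out = cross_level_enhance(upper, lower)
assert out.shape == lower.shape
```

In the actual network the channel weights would be produced by learned layers and trained end-to-end; the sketch only shows the data flow of upper-to-lower channel reweighting.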
Keywords: pavement crack detection; multi-scale attention; feature enhancement; multi-semantic; interpretability; robustness