Abstract
To achieve intelligent recognition of pipeline magnetic flux leakage (MFL) images, a recognition algorithm based on an improved SSD network is proposed. With the SSD network model as the base framework, dilated convolution is introduced. Dilated convolution enlarges the receptive field of the network model and extracts low-resolution, high-level semantic features, thereby improving the network's ability to learn the fine details of small targets. Experimental results show that the proposed algorithm automatically identifies the locations of circumferential welds, spiral welds, and defects in MFL data, reaching an accuracy of 92.62% with a false-detection rate below 3% and a missed-detection rate below 6%, and exhibits better robustness.
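The core mechanism the abstract relies on — a dilated (atrous) convolution covering a wider input span without adding parameters — can be illustrated with a minimal 1-D sketch. This is plain Python for illustration only, not the paper's SSD implementation; the function name and toy inputs are assumptions:

```python
def dilated_conv1d(x, w, dilation=1):
    """Valid-mode 1-D dilated convolution (cross-correlation form).

    A kernel of size k with dilation rate r covers an effective span of
    k + (k - 1) * (r - 1) input samples, so the receptive field grows
    while the parameter count stays fixed -- the property the improved
    SSD exploits to learn small-target detail.
    """
    k = len(w)
    span = k + (k - 1) * (dilation - 1)  # effective kernel span
    return [
        sum(w[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ]

# A 3-tap kernel: dilation 1 sees 3 consecutive samples,
# dilation 2 sees every other sample across a span of 5.
x = list(range(10))
w = [1.0, 1.0, 1.0]
print(dilated_conv1d(x, w, dilation=1))  # sums over spans of 3
print(dilated_conv1d(x, w, dilation=2))  # sums over spans of 5
```

Raising the dilation rate shortens the valid output (the span widens) but leaves the kernel at three weights, which is why stacking dilated layers in the SSD backbone enlarges the receptive field cheaply.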