Abstract
To automatically identify the main fiber component of textile fabrics, images of pure fabrics, or of blended fabrics whose main component exceeds 50%, captured at 100–200× magnification were taken as the study objects, and a main-component classification method for textile fabrics based on a deep convolutional neural network was proposed. First, the textile images were cropped and converted to a different color space; then the images were fed into a convolutional neural network to train the main-component classifier; finally, images of the fabrics to be classified were input into the trained network to obtain the main-component classification results. Experiments on 4497 images covering five classes (cotton, polyester, acrylic, wool, and Tencel) showed that the method classifies the main component of the five fabric classes with 96.53% accuracy; compared with other convolutional neural network models, it substantially reduces training time, shrinks the network size, and improves classification accuracy.
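The preprocessing step described above (cropping followed by color space conversion, before the image is fed to the network) can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the abstract does not state the crop size or the target color space, so the 224×224 center crop and the BT.601 grayscale conversion used here are assumptions for demonstration.

```python
import numpy as np

def center_crop(img, size=224):
    """Center-crop an HxWxC image to size x size (input assumed larger)."""
    h, w = img.shape[:2]
    top = (h - size) // 2
    left = (w - size) // 2
    return img[top:top + size, left:left + size]

def rgb_to_gray(img):
    """ITU-R BT.601 luma weights; the target color space is not named in
    the abstract, so grayscale here is an illustrative assumption."""
    return img[..., :3] @ np.array([0.299, 0.587, 0.114])

def preprocess(img, size=224):
    """Crop, convert color space, and scale to [0, 1] for network input."""
    gray = rgb_to_gray(center_crop(img, size))
    return (gray / 255.0).astype(np.float32)[..., None]  # add channel axis

# Example: a synthetic 300x400 RGB image standing in for a fabric photo
img = np.random.randint(0, 256, (300, 400, 3), dtype=np.uint8)
x = preprocess(img)
print(x.shape)  # (224, 224, 1)
```

The preprocessed tensor would then be batched and passed to the trained convolutional network, which outputs a score for each of the five fiber classes.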