Automatic Detection of Cotton Growth Stages Based on Deep Learning (基于深度学习的棉花发育期自动观测)
Article details
  • Authors: 胡锦涛; 王苗苗; 李涛; 吴东丽; 田东哲 (HU Jin-tao; WANG Miao-miao; LI Tao; WU Dong-li; TIAN Dong-zhe)
  • Keywords: Crop observation; Image recognition; Deep learning; Convolutional neural network
  • Affiliations: 河南中原光电测控技术有限公司 (Henan Zhongyuan Photoelectric Measurement and Control Technology Co., Ltd.); 中国气象局气象探测中心 (Meteorological Observation Center, China Meteorological Administration)
  • Journal: 安徽农业科学 (Journal of Anhui Agricultural Sciences), CNKI journal code AHNY
  • Publication date: 2019-06-12
  • Year: 2019
  • Volume/issue: v.47; No.624
  • Language: Chinese
  • Pages: 245-248+251 (5 pages)
  • CN: 34-1076/S
  • CNKI record ID: AHNY201911068
Abstract
Precision agriculture is a major trend in current agricultural development, and the key to achieving it is the ability to extract crop growth information and determine the state of the growing environment accurately and in real time. Existing applications of image-processing technology to crop growth monitoring, in China and abroad, have focused mainly on identifying pests, diseases and weeds; automatic recognition of crop growth stages has rarely been reported. In this study, digital images of cotton fields were taken as the research object, and an automatic observation method for the key development stages of cotton was developed using deep learning. A convolutional neural network model (CNN-CGS) was proposed to extract features from the cotton images, and transfer learning was further used to train the network. Compared with traditional feature-extraction methods, this approach produced more accurate recognition of cotton growth stages. The results provide technical support for automating the recognition of crop development stages and growth conditions, and offer a new approach for timely monitoring of cotton growth, scheduling of agricultural activities, modern farmland management, and scientific assessment of the impact of meteorological factors on cotton.
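The record above includes no source code. As a minimal sketch of the kind of pipeline the abstract describes (feature extraction from cotton field images with a convolutional network, trained by transfer learning to recognise growth stages), the following PyTorch example fine-tunes an ImageNet-pretrained backbone on images labelled by stage. The ResNet-18 backbone, the number of stage classes, the directory layout and the hyperparameters are assumptions made for illustration; they are not the authors' CNN-CGS architecture or training configuration.

# Illustrative sketch only: fine-tune an ImageNet-pretrained CNN to classify
# cotton growth stages from field images. The stage count, data layout and
# hyperparameters are assumptions, not the CNN-CGS model from the paper.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_STAGES = 5  # hypothetical number of key cotton development stages

# Standard ImageNet preprocessing so the pretrained weights see familiar inputs.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumed folder-per-class layout: cotton_stages/train/<stage_name>/*.jpg
train_set = datasets.ImageFolder("cotton_stages/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Transfer learning: reuse pretrained convolutional features and replace the
# final fully connected layer so it outputs one score per growth stage.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_STAGES)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

criterion = nn.CrossEntropyLoss()
# A small learning rate is typical when fine-tuning pretrained weights.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

model.train()
for epoch in range(10):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch + 1}: last-batch loss {loss.item():.4f}")

When labelled field images are scarce, a common variant is to freeze the pretrained convolutional layers and train only the replaced classifier head; whether the original study used such a scheme is not stated in the abstract.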