Person Re-Identification Based on View Information Embedding
  • Chinese title: 基于视角信息嵌入的行人重识别
  • Authors: Bi Xiaojun; Wang Hao
  • Affiliation: College of Information and Communication Engineering, Harbin Engineering University
  • Keywords: machine vision; optics in computing; person re-identification; view information embedding; deep residual convolutional neural network; depthwise separable convolution
  • Journal: Acta Optica Sinica (光学学报); journal code: GXXB
  • Publication date: 2019-03-19
  • Year: 2019
  • Volume/Issue: Vol. 39, Issue 06 (No. 447 overall)
  • Pages: 262-271 (10 pages)
  • Article ID: GXXB201906032
  • CN: 31-1252/O4
  • Language: Chinese
Abstract
We propose a person re-identification model based on view information embedding. Exploiting the view-orientation characteristics of pedestrian images, the pose-sensitive embedding (PSE) network structure is optimized in three ways. First, the feature-fusion stage of PSE is replaced with the concatenation of the feature vectors of three view units, which better respects the distinct feature spaces of different views. Second, the view units are split off from the shallower blocks-3 stage of the backbone network, increasing the separation between the three view feature spaces. Finally, a depthwise separable module, built from an improved depthwise separable convolution, is designed to further extract features from the view units; it keeps the number of model parameters small while increasing the network's nonlinearity and hence its generalization ability. Validation experiments on the Market1501, DukeMTMC-reID, and MARS datasets show that the proposed method achieves better recognition accuracy than several state-of-the-art algorithms.
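The abstract names three concrete architectural changes but the record contains no code. The following is a minimal PyTorch sketch of how such a head could be wired together, not the authors' implementation: it assumes a ResNet-50-style backbone whose blocks-3 stage outputs 1024 channels, and the names `DepthwiseSeparableBlock` and `ViewEmbeddingHead`, the 512-channel unit width, and the input resolution are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): three view units split off after a
# ResNet-50-style blocks-3 stage, each refined by a depthwise separable module,
# with the three embeddings concatenated rather than fused into one vector.
import torch
import torch.nn as nn


class DepthwiseSeparableBlock(nn.Module):
    """Depthwise 3x3 conv followed by pointwise 1x1 conv (MobileNet-style),
    each followed by BN + ReLU; the channel sizes are assumptions."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1,
                                   groups=in_ch, bias=False)  # one filter per channel
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.relu(self.bn1(self.depthwise(x)))
        return self.relu(self.bn2(self.pointwise(x)))


class ViewEmbeddingHead(nn.Module):
    """Three view units (e.g., front/back/side) branching from blocks-3
    features; their pooled embeddings are concatenated, not averaged."""

    def __init__(self, in_ch: int = 1024, unit_dim: int = 512, n_views: int = 3):
        super().__init__()
        self.view_units = nn.ModuleList(
            [DepthwiseSeparableBlock(in_ch, unit_dim) for _ in range(n_views)]
        )
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: blocks-3 output, e.g. (N, 1024, 16, 8) for a 256x128 input
        embeddings = [self.pool(unit(feats)).flatten(1) for unit in self.view_units]
        return torch.cat(embeddings, dim=1)  # (N, n_views * unit_dim)


if __name__ == "__main__":
    head = ViewEmbeddingHead()
    blocks3_feats = torch.randn(4, 1024, 16, 8)  # stand-in for backbone features
    print(head(blocks3_feats).shape)  # torch.Size([4, 1536])
```

Concatenation keeps the three view-specific subspaces distinguishable in the final descriptor, whereas a weighted fusion would collapse them into one space; the depthwise separable module adds an extra nonlinear stage at a fraction of the parameter cost of a standard 3x3 convolution.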
References
[1] Guo P Y,Su A,Zhang H L,et al.Online mixture of random Naïve Bayes tracker combined texture with shape feature[J].Acta Optica Sinica,2015,35(3):0315002.
    [2] Sun X W,Xu Q S,Cai Y,et al.Sea sky line detection based on edge phase encoding in complicated background[J].Acta Optica Sinica,2017,37(11):1110002.
    [3] Gray D,Tao H.Viewpoint invariant pedestrian recognition with an ensemble of localized features[M]//Forsyth D,Torr P,Zisserman A.Lecture Notes in Computer Science.Berlin,Heidelberg:Springer,2008,5302:262-275.
    [4] Prosser B J,Zheng W S,Gong S,et al.Person re-identification by support vector ranking[C]//Proceedings of the British Machine Vision Conference.August 31-September 3,2010,Aberystwyth,UK.Durham,England,UK:BMVA Press,2010:21.
    [5] Chen Y,Fan R S,Wang J X,et al.Cloud detection of ZY-3 satellite remote sensing images based on deep learning[J].Acta Optica Sinica,2018,38(1):0128005.
    [6] Li W,Zhao R,Xiao T,et al.DeepReID:deep filter pairing neural network for person re-identification[C]//2014 IEEE Conference on Computer Vision and Pattern Recognition,June 23-28,2014,Columbus,OH,USA.New York:IEEE,2014:152-159.
    [7] Geng M Y,Wang Y M,Xiang T,et al.Deep transfer learning for person re-identification[EB/OL].(2016-12-22)[2018-12-20].https://arxiv.org/abs/1611.05244.
    [8] Zheng L,Huang Y J,Lu H C,et al.Pose invariant embedding for deep person re-identification[EB/OL].(2017-01-26)[2018-02-20].https://arxiv.org/abs/1701.07732.
    [9] Zhang X,Luo H,Fan X,et al.AlignedReID:surpassing human-level performance in person re-identification[EB/OL].(2018-01-31)[2018-02-20].https://arxiv.org/abs/1711.08184.
    [10] Saquib Sarfraz M,Schumann A,Eberle A,et al.A pose-sensitive embedding for person re-identification with expanded cross neighborhood re-ranking[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition,June 18-23,2018,Salt Lake City,UT,USA.New York:IEEE,2018:420-429.
    [11] Chen Y B,Zhu X T,Gong S G.Person re-identification by deep learning multi-scale representations[C]//2017 IEEE International Conference on Computer Vision Workshops (ICCVW),October 22-29,2017,Venice,Italy.New York:IEEE,2017:2590-2600.
    [12] Liu Y,Yan J J,Ouyang W L.Quality aware network for set to set recognition[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR),July 21-26,2017,Honolulu,HI,USA.New York:IEEE,2017:4694-4703.
    [13] Cao K D,Rong Y,Li C,et al.Pose-robust face recognition via deep residual equivariant mapping[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition,June 18-23,2018,Salt Lake City,UT,USA.New York:IEEE,2018:5187-5196.
    [14] Howard A G,Zhu M L,Chen B,et al.MobileNets:efficient convolutional neural networks for mobile vision applications[EB/OL].(2017-04-17)[2018-12-21].https://arxiv.org/abs/1704.04861.
    [15] Hu J,Shen L,Sun G.Squeeze-and-excitation networks[EB/OL].(2017-10-25)[2018-12-21].https://arxiv.org/abs/1709.01507.
    [16] Yu Q,Chang X,Song Y Z,et al.The devil is in the middle:exploiting mid-level representations for cross-domain instance matching[EB/OL].(2018-04-04)[2018-12-21].https://arxiv.org/abs/1711.08106.
    [17] Zheng L,Shen L Y,Tian L,et al.Scalable person re-identification:a benchmark[C]//2015 IEEE International Conference on Computer Vision (ICCV),December 7-13,2015,Santiago,Chile.New York:IEEE,2015:1116-1124.
    [18] Ristani E,Solera F,Zou R,et al.Performance measures and a data set for multi-target,multi-camera tracking[M]//Hua G,Jégou H.Lecture Notes in Computer Science.Cham:Springer,2016,9914:17-35.
    [19] Zheng L,Bie Z,Sun Y F,et al.MARS:a video benchmark for large-scale person re-identification[M]//Leibe B,Matas J,Sebe N,et al.Computer Vision-ECCV 2016.Cham:Springer,2016,9910:868-884.
