Online learning and joint optimization of combined spatial-temporal models for robust visual tracking
Abstract
Visual tracking is highly challenged by factors such as occlusion, background clutter, abrupt target motion, illumination variation, and changes in scale and orientation. In this paper, an integrated framework for online learning of fused temporal-appearance and spatial-constraint models for robust and accurate visual target tracking is proposed. The temporal appearance model encapsulates historical appearance information of the target in order to cope with variations due to illumination changes and motion dynamics. The spatial constraint model, in turn, exploits the relationships between the target and its neighbors to handle occlusion and cluttered backgrounds. To reduce the computational complexity of state estimation and to emphasize the relative importance of the different basis vectors, a K-nearest Local Smooth Algorithm (KLSA) is used to describe the spatial state model. Further, a customized Accelerated Proximal Gradient (APG) method is implemented to iteratively obtain an optimal solution using KLSA. Finally, the optimal state estimate is obtained from weighted samples within a particle filtering framework. Experimental results on large-scale benchmark sequences show that the proposed tracker achieves favorable performance compared to state-of-the-art methods.
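The sketch below illustrates, at a high level, how an APG-style solver and particle weighting could fit together in such a tracker. It is a minimal illustration under assumptions, not the paper's method: it uses a standard FISTA-type accelerated proximal gradient for an L1-regularized reconstruction rather than the paper's KLSA-specific formulation, and the dictionary `D`, patch observations, `lam`, and `sigma` are hypothetical illustration parameters.

```python
import numpy as np

def apg_l1(D, y, lam=0.01, n_iter=50):
    """Accelerated proximal gradient (FISTA-style) solver for
    min_x 0.5*||D x - y||^2 + lam*||x||_1.
    Generic APG sketch; the paper's KLSA-specific weighting is not modeled."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    step = 1.0 / L
    x = np.zeros(D.shape[1])
    x_prev = x.copy()
    t, t_prev = 1.0, 1.0
    for _ in range(n_iter):
        z = x + ((t_prev - 1.0) / t) * (x - x_prev)   # momentum extrapolation
        grad = D.T @ (D @ z - y)                       # gradient of the smooth term
        w = z - step * grad
        # Soft-thresholding is the proximal operator of the L1 penalty.
        x_prev, x = x, np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)
        t_prev, t = t, (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    return x

def estimate_state(particles, D, observations, lam=0.01, sigma=0.1):
    """Weight each particle by the reconstruction likelihood of its image patch
    under the template dictionary D, then return the weighted mean state."""
    weights = np.empty(len(particles))
    for i, y in enumerate(observations):   # one vectorized candidate patch per particle
        x = apg_l1(D, y, lam)
        err = np.linalg.norm(y - D @ x) ** 2
        weights[i] = np.exp(-err / (2.0 * sigma ** 2))
    weights /= weights.sum() + 1e-12
    return weights @ particles, weights    # weighted-average state estimate and weights
```

In this reading, each particle encodes a candidate target state, its cropped patch is sparsely reconstructed from the template dictionary via APG, and the reconstruction error drives the particle weight used for the final weighted state estimate.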