Abstract
View-based 3D model descriptors, which represent a 3D model by its projected views, suffer from limitations in viewpoint sampling and computational cost. This paper proposes a new 3D model descriptor, called the Bag-of-View-Words (BoVW) descriptor, which describes a 3D model by measuring the occurrences of its projected views. An adaptive clustering method is applied to reduce the redundancy among the projected views of each 3D model. A 3D model is then represented by a multi-resolution histogram, which combines several BoVW descriptors computed at different levels, and the codebook is obtained by unsupervised learning. We also propose a new pyramid matching method for 3D model comparison. Experimental results demonstrate that our method outperforms several existing 3D model descriptors in terms of both retrieval precision and computational cost.
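The pipeline summarized above (codebook learning, BoVW histogram construction, and pyramid matching) can be sketched as follows. This is a minimal illustration, not the paper's exact method: it assumes each projected view is already encoded as a feature vector, uses plain k-means as a stand-in for the unsupervised codebook learning, and uses the standard pyramid match kernel (histogram intersection across resolutions) as a stand-in for the proposed matching scheme.

```python
import numpy as np

def learn_codebook(features, k, iters=20, seed=0):
    """Stand-in for unsupervised codebook learning: plain k-means."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # assign each view feature to its nearest codeword
        dists = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            pts = features[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def bovw_histogram(view_features, codebook):
    """Count codeword occurrences over a model's projected views."""
    dists = np.linalg.norm(view_features[:, None] - codebook[None], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()  # normalize so models with different view counts compare

def pyramid_match(h1, h2, levels=3):
    """Pyramid match kernel: histogram intersection at several resolutions;
    matches that only appear at coarser levels get geometrically smaller weight."""
    score, prev = 0.0, 0.0
    a, b = h1.copy(), h2.copy()
    for level in range(levels):
        inter = np.minimum(a, b).sum()
        score += (inter - prev) / (2 ** level)
        prev = inter
        # merge adjacent bins to form the next, coarser resolution
        a = a.reshape(-1, 2).sum(axis=1)
        b = b.reshape(-1, 2).sum(axis=1)
    return score
```

With normalized histograms the self-match score is exactly 1, so the kernel can be read directly as a similarity in [0, 1] for ranking retrieval results.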