Research on Key Problems of Image Analysis Based on Ensemble Learning
Abstract
With the spread of the Internet, and in particular the exponential growth of online image and video data, processing, analyzing, and learning from these data to extract useful information and knowledge has become essential for accomplishing basic visual tasks and satisfying user needs. Ensemble learning, an effective machine learning method developed in recent years, has been widely applied in many fields and has become a research hotspot in computer vision in particular. Building on a focused survey of the development and theory of ensemble learning and of its applications to three important computer vision tasks (image representation, image segmentation, and image classification), this thesis studies the use of ensemble learning in these three visual tasks.
     By analyzing the problems present in these three visual tasks, the thesis applies ensemble learning to image feature representation, segmentation labeling, and image classification, effectively improving task performance. The main achievements and innovations are as follows:
     (1) For the image feature representation problem, a feature incremental learning method in independent subspaces is proposed for the original distribution of unlabeled data; it produces a structured matrix of feature primitives that forms an effective feature-space representation. A sample distance metric based on affinity propagation (AP) clustering is also proposed, together with a definition of sample singularity, which can effectively detect outliers under the original data distribution and thereby support sample selection. The method applies to both image classification and image retrieval. Experiments show that the proposed approach finds singular samples effectively and outperforms current mainstream feature representation and learning methods; they also confirm that singular images can significantly improve classification accuracy. In addition, multi-kernel, multi-feature ensembles are studied, and experiments show that they are more effective for scene classification.
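The AP-based sample selection described above can be illustrated with a small sketch. This is not the thesis's actual algorithm: it is a minimal pure-NumPy affinity propagation (the Frey-Dueck message-passing updates), and "singularity" is approximated here simply as each sample's distance to its nearest exemplar. The function names, the 1.5x-minimum preference choice, and the scoring rule are all our own assumptions.

```python
import numpy as np

def affinity_propagation(S, damping=0.9, iters=200):
    """Frey-Dueck message passing. S: (n, n) similarity matrix, diagonal = preferences."""
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities
    A = np.zeros((n, n))  # availabilities
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{k' != k} (a(i,k') + s(i,k'))
        AS = A + S
        idx = AS.argmax(axis=1)
        first = AS[np.arange(n), idx]          # best competitor per row
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(axis=1)                # second-best competitor per row
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # a(i,k) = min(0, r(k,k) + sum_{i' != i,k} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        colsum = Rp.sum(axis=0)
        Anew = np.minimum(0, colsum[None, :] - Rp)
        np.fill_diagonal(Anew, colsum - Rp.diagonal())  # a(k,k) has no min(0, .)
        A = damping * A + (1 - damping) * Anew
    # a point is an exemplar when its combined self-message is positive
    return np.flatnonzero((A + R).diagonal() > 0)

def singularity(X, exemplars):
    """Assumed singularity score: Euclidean distance to the nearest exemplar."""
    d = np.sqrt(((X[:, None, :] - X[exemplars][None, :, :]) ** 2).sum(-1))
    return d.min(axis=1)
```

On a toy set of one tight cluster plus one far-away point, the isolated point receives the highest singularity score, which is the kind of outlier the sample-selection step is meant to surface.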
     (2) Image segmentation with unsupervised ensemble clustering is studied. Because a given single clusterer performs differently on different problems, and different single clusterers perform differently on the same problem, ensemble clustering is used to improve the accuracy and generalization of single clusterers. Image segmentation is formulated as a pixel-level labeling problem, ensemble clustering algorithms are extended to image segmentation, and a segmentation method based on an ensemble clustering mechanism is proposed. The segmentation results of multiple base clusterers are combined: the diverse base clustering members are first aligned, and a consensus function built on a weighted voting mechanism then merges the ensemble segmentation results. Experiments on the UC Berkeley image dataset show that ensemble clustering yields segmentations that are qualitatively more consistent with human visual perception than those of a single clusterer, and that quantitative measures also improve considerably.
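The align-then-vote consensus step above can be sketched as follows. This is only an illustration under simplifying assumptions, not the thesis's alignment procedure: labels are aligned greedily by overlap with a reference clustering (an optimal assignment, e.g. Hungarian matching, would be stricter), and `align_labels` / `consensus_segmentation` are hypothetical names.

```python
import numpy as np

def align_labels(ref, lab, k):
    """Relabel `lab` so its cluster ids match `ref` by greatest pixel overlap.

    Greedy: each of lab's labels maps independently to its best-overlapping
    ref label (two labels may collide, unlike an optimal 1-to-1 assignment).
    """
    mapping = np.zeros(k, dtype=int)
    for c in range(k):
        mask = lab == c
        mapping[c] = np.bincount(ref[mask], minlength=k).argmax() if mask.any() else c
    return mapping[lab]

def consensus_segmentation(labelings, weights, k):
    """Weighted pixel-wise voting over aligned base segmentations.

    labelings: list of flat per-pixel label arrays from the base clusterers.
    weights:   one weight per base clusterer (e.g. a quality estimate).
    """
    ref = labelings[0]                      # first member serves as alignment reference
    votes = np.zeros((ref.size, k))
    for lab, w in zip(labelings, weights):
        aligned = align_labels(ref, lab, k)
        votes[np.arange(ref.size), aligned] += w
    return votes.argmax(axis=1)             # consensus label per pixel
```

With three toy 4-pixel "segmentations" (one of them label-permuted, one with a noisy pixel) and equal weights, the vote recovers the clean two-segment labeling, which is exactly the effect the consensus function is designed to have.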
     (3) Image classification with a transfer ensemble mechanism is studied. A transfer learning strategy designs knowledge-sharing methods across related domains or tasks: knowledge is extracted from one or more source-domain tasks and applied to a target-domain task, and by reusing existing historical training data, knowledge propagates across domains, tasks, and distributions, improving generalization on problems with few samples or with differently distributed training and test data. The focus is on using transfer learning theory to solve image classification under small-sample conditions and mismatched training/test distributions. Taking the difference between the target-domain and source-domain sample distributions into account, a covariate shift correction is introduced into the loss function of the original multi-source ensemble transfer learning algorithm. Further, the transferability of each source domain is computed in every iteration so that irrelevant source domains can be filtered out, which suppresses the "negative transfer" phenomenon under multiple source domains to some extent and reduces computational cost. Experiments show that the new method can effectively classify images under small-sample conditions.
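The source-selection step can be sketched in isolation. The code below is a hedged illustration, not the thesis's algorithm (which embeds transferability scoring inside boosting iterations and also corrects the loss for covariate shift): here "transferability" is simply approximated as the accuracy of a nearest-centroid classifier trained on each source domain and evaluated on the few labeled target samples, and sources scoring below a threshold `tau` are dropped. All names, the centroid classifier, and the threshold are our own assumptions.

```python
import numpy as np

def centroid_classifier(X, y):
    """Train a nearest-centroid classifier; returns a predict function."""
    classes = np.unique(y)
    cents = np.stack([X[y == c].mean(axis=0) for c in classes])
    return lambda Z: classes[
        np.argmin(((Z[:, None, :] - cents[None]) ** 2).sum(-1), axis=1)
    ]

def transferability(Xs, ys, Xt, yt):
    """Proxy score: accuracy of a source-trained classifier on labeled target data."""
    clf = centroid_classifier(Xs, ys)
    return (clf(Xt) == yt).mean()

def select_sources(sources, Xt, yt, tau=0.6):
    """Keep only source domains whose transferability exceeds tau.

    Filtering low-scoring sources is one simple way to curb negative
    transfer before the ensemble is trained.
    """
    return [(Xs, ys) for Xs, ys in sources if transferability(Xs, ys, Xt, yt) >= tau]
```

For example, a source domain whose class layout matches the target passes the filter, while a source with flipped labels (a negative-transfer hazard) scores near zero accuracy and is discarded.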
With the development and popularization of Internet technology, image and video data have increased exponentially. Effective inference and learning algorithms for computer vision tasks are therefore very important, helping people obtain useful information and knowledge conveniently. Recently, ensemble learning, an effective machine learning method, has been widely used in various fields and has attracted increasing attention, especially in computer vision. This paper summarizes the state of the art of ensemble learning, focusing on three applications: image representation, segmentation, and classification. By analyzing the existing problems in these tasks, we explore ensemble learning methods to improve the performance of feature representation, segmentation labeling, and object classification. The main achievements and innovations can be summarized as follows:
     (1) We focus on the problem of image feature representation. A feature incremental learning method in independent subspaces of the original unlabeled data space is proposed to obtain a structured feature element matrix and form an effective feature-space representation. A distance measure based on AP clustering is also provided, and the singularity of a sample is defined, which can detect outliers in the original data space to aid sample selection. The method can be used not only in image classification but also in image retrieval. Experiments show that our method finds outliers effectively and outperforms other popular feature representation and learning methods. This paper also studies multi-kernel, multi-feature ensembles; experiments show that they yield more effective results on scene classification.
     (2) Unsupervised clustering ensemble for image segmentation is proposed. Since the same single clusterer performs differently on different problems, and different clusterers perform differently on the same problem, a clustering ensemble is used to improve the accuracy and generalization of clusterers. Image segmentation is defined as a pixel-level labeling problem, and the clustering ensemble algorithm is applied to this task. A clustering ensemble mechanism for image segmentation is provided: the segmentation results of multiple base clusterers are first aligned to resolve their label differences, and the ensemble segmentation results are then combined with a weighted voting strategy. Experiments on the UC Berkeley image dataset show that segmentations produced by ensemble clustering are more consistent with human perception than those of a single clusterer, and the quantitative indicators also improve greatly.
     (3) Classifier ensembles with transfer learning are studied to solve the image classification problem. By designing a transfer learning strategy, knowledge is shared among related tasks: it can be extracted from one or more source domains to solve the problem in the target domain, and by reusing existing training data, knowledge is propagated across different domains, tasks, and distributions. Generalization improves when solving problems with few samples or with differently distributed training and test sets. We focus on how to use transfer learning to solve image classification in these cases. Covariate shift correction is incorporated into the loss function to cope with the distribution differences between the source and target domains. Additionally, the transferability of each source domain is evaluated, and irrelevant source domains are eliminated gradually. Our method enhances the effectiveness of choosing usable source domains, avoids negative transfer, and improves computational efficiency. Experimental results show that the proposed algorithm achieves higher classification accuracy using less training data.
