Multiview Metric and Regression Learning: Methods and Applications
Abstract
Distance metric and regression learning play a crucial role in machine learning, pattern recognition, and computer vision. The performance of many practical tasks, such as image clustering, classification, and content-based image annotation and retrieval, depends critically on the choice of an appropriate distance metric, while regression learning provides one of the most effective tools for metric learning and for image processing problems. Research on metric and regression learning is therefore of broad significance and value. However, the vast majority of existing metric and regression learning algorithms target a single data set. With the rapid development of the Internet and the growing popularity of digital imaging devices, data are often composed of multiple information sources or multiple feature representations, exhibiting a multimodal character. To analyze and process multimodal data effectively, this thesis studies multiview metric and regression learning. Work on this topic has only just begun, and all existing methods model the data globally. In recent years, however, researchers have found that, compared with global methods, constructing prediction functions locally usually yields lower error and is therefore more robust and flexible; local learning also substantially strengthens an algorithm's ability to handle complex problems. Motivated by this, the thesis investigates multiview metric and regression learning methods that combine local and global modeling, and applies them to practical problems. The work consists of four parts:
    1. We propose a multiview metric learning algorithm with global consistency and local smoothness. The algorithm learns a shared latent feature space, through which connections between the multiview observations are established indirectly. Learning decomposes into two stages: globally consistent shared-latent-space learning and locally smooth multiview metric learning. In stage one, based on spectral graph theory, low-dimensional representations are obtained for all labeled sample pairs, and this low-dimensional space is taken as the shared latent space. In stage two, using regularized local linear regression, local mapping functions from the input space to the shared latent space are learned for unlabeled and test samples, where graph Laplacian regularization keeps the learned local metric functions varying smoothly over the whole data space. Both stages are formulated as convex optimization problems with closed-form solutions and simple solvers. Experiments on pose and expression alignment demonstrate the effectiveness of the proposed method.
    2. We propose instance-specific canonical correlation analysis. Building on the classical statistical method of canonical correlation analysis (CCA), we develop an instance-specific variant that is simultaneously local and nonlinear. Unlike the previous work, it requires no two-stage learning, and thus establishes a unified framework for multiview metric learning that combines local and global information. We first derive a least-squares-regression solution to CCA; then, within the least-squares framework, we compute instance-specific local mapping functions along smooth curves on the data manifold, approximating the nonlinear distribution of the whole data space. To better exploit the information in unlabeled samples, we further discuss a semi-supervised extension. The resulting objective is solved by alternating optimization and, under a joint-convexity guarantee, attains the globally optimal solution.
    3. To cope with the big data problem, we propose a parametric local multiview Hamming distance metric learning algorithm. We first define discrete local multimodal hash functions that map data from the original input space into a binary space, and use the Hamming distance in that space as the final metric. To balance locality against computational efficiency, we approximate each local hash function parametrically as a linear weighted combination of the mapping functions associated with a set of anchor points, and give a theoretical upper bound on the approximation error. We then set up an objective combining local and global terms and solve it efficiently with the conjugate gradient method and a sequential learning procedure. Experiments on cross-media retrieval show that the proposed method better models the complex structure of large-scale data and achieves higher query accuracy.
    4. Beyond learning from multiple data sets, the thesis also builds a combined local-global multiview model on a single data set, proposing a progressive image denoising method based on multiview kernel regression. The target image is first represented at multiple scales and then denoised progressively from coarse to fine. Within each scale, a graph-Laplacian least-squares regression model with an implicit kernel simultaneously minimizes the least-squares error on the measurable samples and preserves the manifold (global) structure of the whole image data space. Between two successive scales, a graph-Laplacian least-squares regression with an explicit kernel learns local structural regularity and propagates it from the coarse scale to the fine scale. Intra-scale and inter-scale correlations share a unified objective function but are optimized in two different ways, so that the image's global structure and local regularity are better exploited and combined. Experiments show that the proposed method achieves performance comparable to or better than mainstream denoising algorithms.
Distance metric and regression learning play an important role in machine learning, pattern recognition, and computer vision. Many tasks, such as image classification, clustering, and content-based image annotation and retrieval, depend critically on the choice of an appropriate distance metric, while regression learning provides an effective tool for distance metric learning and image processing problems. The study of distance metric and regression learning is therefore important both in theory and in application. However, most traditional metric and regression learning algorithms focus only on a single dataset. With the rapid development of the Internet and the rising popularity of digital cameras, data often come with different observations or descriptions, which presents a multimodal property. To analyze and process multimodal data effectively, this thesis focuses on multiview metric and regression learning problems. Work on multiview metric and regression learning has only just started, and all existing methods are global. Recently, researchers have observed that, compared with global methods, localizing the prediction function also localizes the estimation error, which makes the predictor more robust and flexible; in addition, learning in a local manner can substantially boost a model's capacity. In this thesis, we study multiview metric and regression learning with combined local and global modeling, and explore their applications. The contents of the thesis are divided into four parts, detailed as follows:
    1. We propose a two-stage multiview metric learning method with global consistency and local smoothness. By learning a shared latent space for the multiview observations, connections between data from different views are established implicitly. The learning process decomposes into two stages. In the first stage, based on spectral graph theory, the method obtains common low-dimensional embeddings for all labeled correspondence pairs. In the second stage, based on regularized local linear regression, it learns the mappings from the input space of each view to the shared latent space for unlabeled and test data; a graph Laplacian regularization term is incorporated to keep the learned metric varying smoothly. Both stages are formulated as convex optimization problems that can be solved efficiently with closed-form solutions. Experimental results on pose and expression alignment demonstrate the effectiveness of the proposed method.
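The two-stage procedure can be sketched in a few lines. This is a minimal illustration under assumed inputs (a joint affinity matrix `W` over the labeled correspondence pairs, a Gaussian locality weight, a ridge term standing in for the smoothness regularizer); the function names and parameters are hypothetical, not the thesis's exact formulation.

```python
import numpy as np

def shared_latent_embedding(W, dim):
    """Stage 1 (sketch): spectral embedding of labeled correspondence pairs.
    W is a joint affinity matrix over samples from all views; the bottom
    nontrivial eigenvectors of the graph Laplacian give the shared space."""
    L = np.diag(W.sum(axis=1)) - W        # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)        # eigenvalues in ascending order
    return vecs[:, 1:dim + 1]             # skip the trivial constant vector

def local_linear_map(X, Z, x_query, bandwidth=1.0, reg=1e-2):
    """Stage 2 (sketch): regularized locally weighted linear regression from
    one view's input space X to the shared coordinates Z, evaluated at
    x_query; the ridge term keeps the per-query map well conditioned."""
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * bandwidth ** 2))
    Xw = X * w[:, None]                   # locality-weighted design matrix
    A = X.T @ Xw + reg * np.eye(X.shape[1])
    B = np.linalg.solve(A, Xw.T @ Z)      # closed-form per-query linear map
    return x_query @ B
```

Both steps reduce to an eigenproblem and a linear solve, consistent with the closed-form solvability claimed above.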
    2. We propose a unified framework for multiview metric learning via instance-specific canonical correlation analysis. Building on canonical correlation analysis (CCA), we develop an instance-specific variant that achieves locality and nonlinearity at the same time. Unlike the work above, the proposed method needs no two-stage learning process and thus forms a unified framework. First, we derive a least-squares solution for CCA, which sets the stage for the proposed method. Second, within the least-squares regression framework, CCA is extended to approximate nonlinear data by computing instance-specific projections along the smooth curve of the manifold. Furthermore, the method extends to the semi-supervised setting by exploiting unlabeled data to further improve performance. The optimization problem is proved to be jointly convex and can be solved efficiently by alternating optimization, so the globally optimal solution is achieved with a theoretical guarantee.
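The idea of making CCA instance-specific can be illustrated as follows. This sketch uses the classical whitened-SVD route to CCA (not the thesis's least-squares derivation) and a hypothetical Gaussian reweighting around the query; all names and the bandwidth parameter are assumptions for illustration.

```python
import numpy as np

def cca(X, Y, dim, reg=1e-3):
    """Classical regularized CCA (sketch): whiten each view, then take the
    SVD of the whitened cross-covariance to get canonical projections."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    Cxx = X.T @ X + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y + reg * np.eye(Y.shape[1])
    Kx = np.linalg.cholesky(np.linalg.inv(Cxx))   # whitening factor, view 1
    Ky = np.linalg.cholesky(np.linalg.inv(Cyy))   # whitening factor, view 2
    U, s, Vt = np.linalg.svd(Kx.T @ (X.T @ Y) @ Ky)
    return Kx @ U[:, :dim], Ky @ Vt[:dim].T       # projections Wx, Wy

def instance_cca(X, Y, x_query, dim, bandwidth=1.0, reg=1e-3):
    """Instance-specific variant (sketch): reweight samples around x_query
    before running CCA, so each instance gets its own local projection and
    the ensemble of local maps approximates a nonlinear distribution."""
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * bandwidth ** 2))
    sw = np.sqrt(w)[:, None]                      # sqrt-weighting of rows
    return cca(X * sw, Y * sw, dim, reg)
```

With a small bandwidth the projection adapts to the neighborhood of each query; as the bandwidth grows it recovers ordinary global CCA.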
    3. To confront the big data problem, we propose parametric local multiview Hamming distance metric learning. First, discrete local multimodal hashing functions are defined to project data from input features to binary codes, and the Hamming distance in the discrete space serves as the metric. To balance locality and computational efficiency, we approximate the local hashing function at each point as a linear weighted combination of a small set of projection bases associated with a set of anchor points, and verify an error bound for the approximated local hashing projection. An objective function combining local and global terms is then established and optimized efficiently with the conjugate gradient method and a sequential learning process. Experimental results on cross-media retrieval demonstrate that local hash functions model the complex structure of large-scale datasets better than global ones and achieve higher empirical query accuracy.
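The anchor-based parametrization can be sketched directly: each point's projection matrix is a soft-weighted combination of per-anchor projections, followed by sign binarization and Hamming comparison. The Gaussian anchor weighting and all names here are illustrative assumptions, not the learned functions from the thesis.

```python
import numpy as np

def anchor_weights(x, anchors, bandwidth=1.0):
    """Soft weights over anchor points; the local projection at x is their
    convex combination (a hypothetical Gaussian kernel is used here)."""
    d2 = np.sum((anchors - x) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    return w / w.sum()

def local_hash(x, anchors, Ws, bandwidth=1.0):
    """Parametric local hashing (sketch): combine the per-anchor projection
    matrices Ws[k] (shape: n_anchors x d x bits), then binarize by sign."""
    g = anchor_weights(x, anchors, bandwidth)
    W = np.tensordot(g, Ws, axes=1)       # weighted (d, bits) projection
    return (x @ W >= 0).astype(np.uint8)  # binary code of length `bits`

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int(np.count_nonzero(a != b))
```

Because only a small set of anchor projections is stored and combined, the cost per query is independent of the dataset size, which is the locality/efficiency trade-off described above.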
    4. Besides the study of multiple datasets, we further discuss multiview models on a single dataset with combined local and global modeling, and propose a unified framework for progressive image denoising via multiview kernel regression. We first construct a multiscale representation of the target image, then progressively recover the degraded image in scale space from coarse to fine. On one hand, within each scale, a graph Laplacian regularization model represented by an implicit kernel is learned, which simultaneously minimizes the least-squares error on the measured samples and preserves the global manifold structure of the image data space. On the other hand, between two successive scales, the model is learned in a high-dimensional feature space through an explicit kernel mapping to describe the inter-scale correlation, so that local structural regularity is learned and propagated from coarser to finer scales. The objective functions take the same form for intra-scale and inter-scale processing but are solved in different feature spaces, so the local and global correlations in the image can be better exploited and combined. Experimental results demonstrate that the proposed method achieves comparable or even better results on image denoising problems.
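The intra-scale step described above is close in spirit to Laplacian-regularized kernel least squares; a minimal sketch of that building block follows. The Gaussian kernel over pixel coordinates, the reuse of the kernel as graph weights, and all parameter values are illustrative assumptions, not the thesis's actual implicit-kernel model.

```python
import numpy as np

def laplacian_kernel_denoise(y, coords, lam=0.1, mu=0.1, sigma=1.0):
    """Within-scale step (sketch): graph-Laplacian-regularized kernel ridge
    regression over pixel sites, in the spirit of Laplacian RLS.
    Minimizes ||K a - y||^2 + lam * a^T K a + mu * (K a)^T L (K a)."""
    d2 = np.sum((coords[:, None, :] - coords[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2 * sigma ** 2))    # Gaussian kernel on pixel sites
    W = K                                 # reuse affinities as graph weights
    L = np.diag(W.sum(axis=1)) - W        # graph Laplacian (manifold term)
    n = len(y)
    # Setting the gradient to zero gives K(K + lam I + mu L K) a = K y.
    a = np.linalg.solve(K + lam * np.eye(n) + mu * L @ K, y)
    return K @ a                          # smoothed pixel intensities
```

In the full method this solve would be applied per scale, with the inter-scale step trained analogously in an explicit kernel feature space so the coarse-scale regularity propagates to finer scales.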
