Research on Multi-View Learning Methods in Complex Scenarios (复杂场景下的多视图学习方法研究)
Abstract
Multi-view learning refers to learning from data that naturally come with multiple feature representations. Over the past decade it has attracted wide attention and produced many theoretical results and practical algorithms. So far, however, most work has concentrated on the conventional multi-view classification, clustering, and dimensionality reduction tasks, and is applicable only in the fully paired setting. Real-world applications keep giving rise to new and more complex learning scenarios. For example, multi-view classification and retrieval often depend heavily on a suitable metric, which calls for multi-view metric learning models. Moreover, in some harsh environments the multi-view data collected and transmitted cannot be guaranteed to be fully paired, owing to device failures, malicious attacks, or simple application restrictions; how to design multi-view algorithms for such settings is therefore a main concern of this dissertation. Finally, transplanting multi-view ideas into single-view learning can also improve single-view performance: this dissertation constructs a new view that reveals the cluster structure of the data and uses it to learn classification and clustering simultaneously. The main contributions are summarized below.
     1) A co-training style multi-view metric learning algorithm, co-metric. Following the co-training scheme, it learns one metric per view and makes the views teach each other to boost their performance in the semi-supervised setting. Because it reuses existing single-view metric learning algorithms, it is very easy to implement. Its key step is using the learned metrics to pick out confidently labeled samples; to this end we design a simple but effective rule: set the parameter K of the K-nearest-neighbor classifier to a large integer (see the first sketch after this list). Experiments demonstrate the algorithm's effectiveness.
     2) MLHD, a cross-view metric learning model for the totally unpaired setting that aligns prior and posterior probabilities simultaneously. The model first maps the samples of each view into a common space and then aligns their priors p(sample) and posteriors p(label|sample) at the same time. After rearrangement and a change of variables, the model can be reparameterized by a single positive semi-definite matrix. By regularizing this matrix with a log-determinant (LogDet) function, MLHD can be optimized by the Bregman projection algorithm, which automatically maintains positive semi-definiteness (see the second sketch after this list). We further prove that the model has an equivalent optimization problem depending only on inner products of the samples, so it can be conveniently kernelized. Experiments show good performance on cross-language retrieval and cross-domain object recognition.
     3) A new kind of side information, cross-view must-links and cannot-links, applied to multi-view classification in the totally unpaired setting. This side information is a natural generalization of the widely used single-view must-links and cannot-links: it indicates whether two samples from different views share the same label. To exploit it, we modify the classical regularization model by adding cross-view must-link and cannot-link regularization terms (see the third sketch after this list). Experiments confirm the effectiveness of this side information.
     4) A model that learns classification and clustering simultaneously on single-view data by constructing a new cluster-structure view. The cluster-structure view couples the classification and clustering tasks, and the model can be optimized by block coordinate descent (see the fourth sketch after this list). Compared with the earlier method of Cai et al. [42], it is more flexible, generalizes to the semi-supervised setting via manifold regularization, and runs nearly an order of magnitude faster.
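The confidence-selection rule in contribution 1) is easy to make concrete. Below is a minimal sketch, assuming each view holds a learned Mahalanobis matrix M; the function names, the default K, and the agreement threshold are illustrative assumptions, not code from the dissertation.

```python
import numpy as np

def mahalanobis_dists(Xa, Xb, M):
    """Pairwise squared Mahalanobis distances (x - y)^T M (x - y)."""
    diff = Xa[:, None, :] - Xb[None, :, :]
    return np.einsum('ijk,kl,ijl->ij', diff, M, diff)

def select_confident(X_lab, y_lab, X_unlab, M, K=20, agree=0.9):
    """Pseudo-label unlabeled samples under metric M and keep only those
    whose K nearest labeled neighbors agree strongly on one label."""
    D = mahalanobis_dists(X_unlab, X_lab, M)
    nn = np.argsort(D, axis=1)[:, :K]        # indices of K nearest labeled points
    votes = y_lab[nn]                        # neighbor labels, shape (n_unlab, K)
    picked, labels = [], []
    for i, row in enumerate(votes):
        vals, counts = np.unique(row, return_counts=True)
        j = counts.argmax()
        if counts[j] / K >= agree:           # a large K makes this vote reliable
            picked.append(i)
            labels.append(vals[j])
    return np.array(picked, dtype=int), np.array(labels)
```

In a co-training loop, the confident samples selected in one view would be added to the labeled pool of the other view before that view's metric is re-learned.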
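Contribution 2) relies on the fact that LogDet-regularized problems can be solved by cyclic Bregman projections that preserve positive semi-definiteness, as in information-theoretic metric learning [24]. The following is a sketch of a single projection onto one linear constraint v^T A v = target; it is a generic illustration under that assumption, not the exact MLHD update.

```python
import numpy as np

def bregman_project(A, v, target):
    """One Bregman projection (LogDet divergence) of a PSD matrix A onto
    the linear constraint v^T A v = target, via the closed-form rank-one
    update A <- A + beta * (A v)(A v)^T."""
    p = float(v @ A @ v)            # current value of the constraint
    beta = (target - p) / (p * p)   # chosen so that v^T A' v == target
    Av = A @ v
    return A + beta * np.outer(Av, Av)
```

Because the update is a rank-one modification computed through A itself, A stays positive definite whenever the target value is positive, which is what lets the optimizer avoid explicit eigenvalue clipping.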
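One plausible way to turn the cross-view must-links and cannot-links of contribution 3) into a regularization term is sketched below: must-linked pairs pull the two views' classifier outputs together, while cannot-linked pairs push them at least a margin apart. The functional form and the margin parameter are assumptions for illustration; the dissertation's exact regularizers may differ.

```python
def crossview_link_penalty(f1_vals, f2_vals, must, cannot, margin=1.0):
    """Regularizer built from cross-view side information.
    f1_vals[i], f2_vals[j]: real-valued classifier outputs in views 1 and 2.
    must / cannot: lists of (i, j) index pairs across the two views."""
    # Must-links: penalize output disagreement across views.
    pull = sum((f1_vals[i] - f2_vals[j]) ** 2 for i, j in must)
    # Cannot-links: hinge penalty if outputs are closer than the margin.
    push = sum(max(0.0, margin - abs(f1_vals[i] - f2_vals[j])) ** 2
               for i, j in cannot)
    return pull + push
```

Added to each view's regularized empirical loss, such a term lets unpaired views constrain one another without requiring any sample-level pairing.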
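Contribution 4) is optimized by block coordinate descent over a clustering block and a classification block. The toy sketch below alternates a k-means-style update of the cluster-structure view with a ridge classifier fitted on the features augmented by cluster indicators; in the actual model the two blocks are coupled through a joint objective, which this simplified illustration omits. All names and the ridge solver are assumptions.

```python
import numpy as np

def fit_joint(X, y, k=3, iters=10, lam=1.0):
    """Toy block coordinate descent: alternate (a) a k-means-style update
    of the cluster-structure view and (b) a ridge classifier on the
    original features concatenated with one-hot cluster indicators."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)          # e.g. +/-1 class labels
    rng = np.random.default_rng(0)
    C = X[rng.choice(len(X), size=k, replace=False)]  # initial centers
    for _ in range(iters):
        # (a) clustering block: assignments, then centers
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)
        z = d.argmin(axis=1)
        for j in range(k):
            if np.any(z == j):
                C[j] = X[z == j].mean(axis=0)
        # (b) classification block: ridge regression on [X, one-hot(z)]
        F = np.hstack([X, np.eye(k)[z]])
        w = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ y)
    return w, C
```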
References
1. Zhou, Z.H. and J. Wang, Machine Learning and Its Applications (in Chinese). 2009: Tsinghua University Press.
2. Bian, Z. and X. Zhang, Pattern Recognition, 2nd ed. (in Chinese). 1999: Tsinghua University Press.
3. Mitchell, T., Machine Learning. 1997: McGraw Hill.
4. The State Council of the People's Republic of China, National Medium- and Long-Term Program for Science and Technology Development (2006-2020) (in Chinese).
5. Chapelle, O., B. Schölkopf, and A. Zien, Semi-Supervised Learning. Vol. 2. 2006: MIT Press, Cambridge, MA.
6. Zhu, X., Semi-supervised learning literature survey. 2005.
7. Pan, S.J. and Q. Yang, A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 2010. 22(10): p. 1345-1359.
8. Blum, A. and T. Mitchell. Combining labeled and unlabeled data with co-training. in Proceedings of the Eleventh Annual Conference on Computational Learning Theory (COLT 1998). 1998: ACM.
9. Sindhwani, V., P. Niyogi, and M. Belkin. A co-regularization approach to semi-supervised learning with multiple views. in Proceedings of the Workshop on Learning with Multiple Views, 22nd ICML. 2005.
10. Nigam, K. and R. Ghani. Analyzing the effectiveness and applicability of co-training. in Proceedings of the 9th International Conference on Information and Knowledge Management (CIKM 2000). 2000: ACM.
11. Li, G., S.C.H. Hoi, and K. Chang. Two-view transductive support vector machines. in SIAM International Conference on Data Mining (SDM 2010). 2010.
12. Bickel, S. and T. Scheffer. Multi-view clustering. in Proceedings of the IEEE International Conference on Data Mining (ICDM 2004). 2004.
13. de Sa, V.R., et al., Multi-view kernel construction. Machine Learning, 2010. 79(1): p. 47-71.
14. Wu, Q., Y. Ying, and D.X. Zhou, Multi-kernel regularized classifiers. Journal of Complexity, 2007. 23(1): p. 108-134.
15. Rakotomamonjy, A., et al. More efficiency in multiple kernel learning. in Proceedings of the 24th International Conference on Machine Learning (ICML 2007). 2007: ACM.
16. Muslea, I., S. Minton, and C.A. Knoblock. Active + semi-supervised learning = robust multi-view learning. in Proceedings of the 19th International Conference on Machine Learning (ICML 2002). 2002.
17. Yarowsky, D. Unsupervised word sense disambiguation rivaling supervised methods. in Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics (ACL 1995). 1995: Association for Computational Linguistics.
18. Brefeld, U. and T. Scheffer. Co-EM support vector learning. in Proceedings of the Twenty-First International Conference on Machine Learning (ICML 2004). 2004: ACM.
19. Zhang, M.L. and Z.H. Zhou, CoTrade: Confident co-training with data editing. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2011(99): p. 1-15.
20. Kumar, A. and H. Daumé III. A co-training approach for multi-view spectral clustering. in Proceedings of the 28th International Conference on Machine Learning (ICML 2011). 2011.
21. Muslea, I., S. Minton, and C.A. Knoblock. Selective sampling with redundant views. in Proceedings of the Seventeenth National Conference on Artificial Intelligence (AAAI 2000). 2000: AAAI Press.
22. Farquhar, J., et al., Two view learning: SVM-2K, theory and practice, in Advances in Neural Information Processing Systems (NIPS 2005). 2005.
23. Xing, E.P., et al., Distance metric learning, with application to clustering with side-information. Advances in Neural Information Processing Systems, 2002. 15: p. 505-512.
24. Davis, J.V., et al. Information-theoretic metric learning. in Proceedings of the 24th International Conference on Machine Learning (ICML 2007). 2007: ACM.
25. Globerson, A. and S. Roweis, Metric learning by collapsing classes. Advances in Neural Information Processing Systems (NIPS 2006), 2006. 18: p. 451.
26. Weinberger, K.Q. and L.K. Saul, Distance metric learning for large margin nearest neighbor classification. The Journal of Machine Learning Research, 2009. 10: p. 207-244.
27. Zheng, H., M. Wang, and Z. Li. Audio-visual speaker identification with multi-view distance metric learning. in Proceedings of the 17th IEEE International Conference on Image Processing (ICIP 2010). 2010: IEEE.
28. Zhai, D., H. Chang, S. Shan, X. Chen, and W. Gao, Multi-view metric learning with global consistency and local smoothness. ACM Transactions on Intelligent Systems and Technology, 2011.
29. Abney, S. Bootstrapping. in Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL 2002). 2002: Association for Computational Linguistics.
30. Balcan, M.F., A. Blum, and Y. Ke, Co-training and expansion: Towards bridging theory and practice, in Advances in Neural Information Processing Systems (NIPS 2005). 2004. p. 154.
31. Wang, W. and Z.H. Zhou, Analyzing co-training style algorithms. Proceedings of the 18th European Conference on Machine Learning (ECML 2007), 2007: p. 454-465.
32. Chen, M., K.Q. Weinberger, and Y. Chen. Automatic feature decomposition for single view co-training. in Proceedings of the 28th International Conference on Machine Learning (ICML 2011). 2011.
33. Du, J., C. Ling, and Z.H. Zhou, When does co-training work in real data? Advances in Knowledge Discovery and Data Mining, 2009: p. 596-603.
34. Goldman, S. and Y. Zhou. Enhancing supervised learning with unlabeled data. in Proceedings of the Seventeenth International Conference on Machine Learning (ICML 2000). 2000.
35. Guo, Q., et al. Effective and efficient microprocessor design space exploration using unlabeled design configurations. in Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI 2011). 2011.
36. Zhou, Z.H. and M. Li, Semi-supervised regression with co-training style algorithm. IEEE Transactions on Knowledge and Data Engineering, 2005.
37. Kimura, A., et al., SemiCCA: Efficient semi-supervised learning of canonical correlations. 20th International Conference on Pattern Recognition (ICPR 2010), 2010.
38. Blaschko, M., C. Lampert, and A. Gretton, Semi-supervised Laplacian regularization of kernel canonical correlation analysis. Machine Learning and Knowledge Discovery in Databases, 2008: p. 133-145.
39. Lampert, C. and O. Krömer, Weakly-paired maximum covariance analysis for multimodal dimensionality reduction and transfer learning, in Proceedings of the 11th European Conference on Computer Vision (ECCV 2010). 2010. p. 566-579.
40. Sun, T., et al. Discriminative canonical correlation analysis with missing samples. in 2009 WRI World Congress on Computer Science and Information Engineering. 2009: IEEE.
41. Gu, J., S. Chen, and T. Sun, Localization with incompletely paired data in complex wireless sensor network. IEEE Transactions on Wireless Communications, 2011(99): p. 1-9.
42. Cai, W., S. Chen, and D. Zhang, A simultaneous learning framework for clustering and classification. Pattern Recognition, 2009. 42(7): p. 1248-1259.
43. Weinberger, K.Q. and L.K. Saul. Fast solvers and efficient implementations for distance metric learning. in Proceedings of the 25th International Conference on Machine Learning (ICML 2008). 2008: ACM.
44. Shalev-Shwartz, S., Y. Singer, and A.Y. Ng. Online and batch learning of pseudo-metrics. in Proceedings of the Twenty-First International Conference on Machine Learning (ICML 2004). 2004: ACM.
45. Guillaumin, M., J. Verbeek, and C. Schmid. Is that you? Metric learning approaches for face identification. in IEEE 12th International Conference on Computer Vision (ICCV 2009). 2009: IEEE.
46. Guo, R. and S. Chakraborty, Bayesian adaptive nearest neighbor. Statistical Analysis and Data Mining, 2010. 3(2): p. 92-105.
47. Holmes, C. and N. Adams, A probabilistic nearest neighbour method for statistical pattern recognition. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2002. 64(2): p. 295-306.
48. Tomasev, N., et al. A probabilistic approach to nearest-neighbor classification: naive hubness Bayesian kNN. in Proceedings of the 20th ACM International Conference on Information and Knowledge Management (CIKM 2011). 2011: ACM.
49. Goldberger, J., et al., Neighbourhood components analysis, in Advances in Neural Information Processing Systems (NIPS 2004). 2004.
50. Manocha, S. and M. Girolami, An empirical analysis of the probabilistic K-nearest neighbour classifier. Pattern Recognition Letters, 2007. 28(13): p. 1818-1824.
51. Cucala, L., et al., A Bayesian reassessment of nearest-neighbor classification. Journal of the American Statistical Association, 2009. 104(485): p. 263-273.
52. Sun, T., et al. A novel method of combined feature extraction for recognition. 2008: IEEE.
53. Frank, A. and A. Asuncion, UCI machine learning repository. 2010.
54. Dumais, S.T., Latent semantic analysis. Annual Review of Information Science and Technology, 2004. 38(1): p. 188-230.
55. Cristianini, N., J. Shawe-Taylor, A. Elisseeff, and J. Kandola. On kernel-target alignment. in Advances in Neural Information Processing Systems (NIPS 2001). 2002: The MIT Press.
56. Duan, L., D. Xu, and I. Tsang, Learning with augmented features for heterogeneous domain adaptation, in Proceedings of the 29th International Conference on Machine Learning (ICML 2012). 2012.
57. Shi, X., et al. Transfer learning on heterogeneous feature spaces via spectral transformation. in IEEE 10th International Conference on Data Mining (ICDM 2010). 2010: IEEE.
58. Wang, B., et al. Heterogeneous cross domain ranking in latent space. in Proceedings of the 18th ACM Conference on Information and Knowledge Management (CIKM 2009). 2009: ACM.
59. Wang, C. and S. Mahadevan. Heterogeneous domain adaptation using manifold alignment. in Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI 2011). 2011: AAAI Press.
60. Zhang, Y. and D.Y. Yeung, Multi-task learning in heterogeneous feature spaces, in Proceedings of the 25th AAAI Conference on Artificial Intelligence (AAAI 2011). 2011. p. 1.
61. Qi, G.J., C. Aggarwal, and T. Huang, Transfer learning of distance metrics by cross-domain metric sampling across heterogeneous spaces. SIAM International Conference on Data Mining (SDM 2012). 2012.
62. Saenko, K., et al., Adapting visual category models to new domains. Proceedings of the 11th European Conference on Computer Vision (ECCV 2010), 2010: p. 213-226.
63. Kulis, B., K. Saenko, and T. Darrell. What you saw is not what you get: Domain adaptation using asymmetric kernel transforms. in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2011). 2011: IEEE.
64. Kulis, B., M. Sustik, and I. Dhillon. Learning low-rank kernel matrices. in Proceedings of the 23rd International Conference on Machine Learning (ICML 2006). 2006: ACM.
65. Gretton, A., et al., A kernel method for the two-sample problem, in Advances in Neural Information Processing Systems (NIPS 2007). 2008.
66. Micchelli, C.A., Y. Xu, and H. Zhang, Universal kernels. The Journal of Machine Learning Research, 2006. 7: p. 2651-2667.
67. Kulis, B., M.A. Sustik, and I.S. Dhillon, Low-rank kernel learning with Bregman matrix divergences. The Journal of Machine Learning Research, 2009. 10: p. 341-376.
68. Bregman, L.M., The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Computational Mathematics and Mathematical Physics, 1967. 7(3): p. 200-217.
69. Censor, Y. and S.A. Zenios, Parallel Optimization: Theory, Algorithms, and Applications. 1997: Oxford University Press, USA.
70. Jain, P., et al., Metric and kernel learning using a linear transformation. Journal of Machine Learning Research, 2009.
71. Smola, A.J. and B. Schölkopf, Learning with Kernels. 1998.
72. Stewart, G.W. and J. Sun, Matrix Perturbation Theory. Vol. 175. 1990: Academic Press, New York.
73. Amini, M.R., N. Usunier, and C. Goutte. Learning from multiple partially observed views - an application to multilingual text categorization. in Advances in Neural Information Processing Systems 22 (NIPS 2009). 2010.
74. Bay, H., T. Tuytelaars, and L. Van Gool, SURF: Speeded up robust features. 9th European Conference on Computer Vision (ECCV 2006), 2006: p. 404-417.
75. Lowe, D.G. Object recognition from local scale-invariant features. in Proceedings of the Seventh IEEE International Conference on Computer Vision (ICCV 1999). 1999: IEEE.
76. Basu, S., A. Banerjee, and R.J. Mooney. Active semi-supervision for pairwise constrained clustering. in Proceedings of the 2004 SIAM International Conference on Data Mining (SDM 2004). 2004.
77. Yan, R., et al., Learning with pairwise constraints for video object classification. Constrained Clustering: Advances in Algorithms, Theory, and Applications, 2009: p. 397.
78. Nguyen, N. and R. Caruana, Improving classification with pairwise constraints: A margin-based approach. Machine Learning and Knowledge Discovery in Databases, 2008: p. 113-124.
79. Li, Z., J. Liu, and X. Tang. Pairwise constraint propagation by semidefinite programming for semi-supervised classification. in Proceedings of the 25th International Conference on Machine Learning (ICML 2008). 2008: ACM.
80. Hertz, T., et al. Enhancing image and video retrieval: Learning via equivalence constraints. in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2003). 2003: IEEE.
81. Vapnik, V.N., The Nature of Statistical Learning Theory. 2000: Springer-Verlag New York Inc.
82. Bertsekas, D.P., Nonlinear Programming. 1999: Athena Scientific.
83. Evgeniou, T. and M. Pontil. Regularized multi-task learning. in Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2004: ACM.
84. Argyriou, A., T. Evgeniou, and M. Pontil. Multi-task feature learning. in Advances in Neural Information Processing Systems (NIPS 2007). 2007: The MIT Press.
85. Jebara, T. Multi-task feature and kernel selection for SVMs. in Proceedings of the Twenty-First International Conference on Machine Learning (ICML 2004). 2004: ACM.
86. Jacob, L., F. Bach, and J.P. Vert, Clustered multi-task learning: A convex formulation. Advances in Neural Information Processing Systems (NIPS 2008), 2008.
87. Obozinski, G., B. Taskar, and M.I. Jordan, Joint covariate selection and joint subspace selection for multiple classification problems. Statistics and Computing, 2010. 20(2): p. 231-252.
88. Chen, J., et al. A convex formulation for learning shared structures from multiple tasks. in Proceedings of the 26th Annual International Conference on Machine Learning (ICML 2009). 2009: ACM.
89. Gu, Q. and J. Zhou. Learning the shared subspace for multi-task clustering and transductive transfer classification. in Proceedings of the Ninth IEEE International Conference on Data Mining (ICDM 2009). 2009: IEEE.
90. Duda, R.O., P.E. Hart, and D.G. Stork, Pattern Classification. 2001: Wiley, New York.
91. Goldberg, A., X. Zhu, and S. Wright. Dissimilarity in graph-based semi-supervised classification. in Eleventh International Conference on Artificial Intelligence and Statistics (AISTATS 2007). 2007.
92. Tong, W. and R. Jin. Semi-supervised learning by mixed label propagation. in Proceedings of the 22nd National Conference on Artificial Intelligence (AAAI 2007). 2007: AAAI Press.
93. Belkin, M., P. Niyogi, and V. Sindhwani, Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. The Journal of Machine Learning Research, 2006. 7: p. 2399-2434.
94. Zhou, Z.H. and M. Li, Semi-supervised learning by disagreement. Knowledge and Information Systems, 2010. 24(3): p. 415-439.
95. Zhou, Z.H., Unlabeled data and multiple views. Lecture Notes in Computer Science, 2012: p. 1-7.
96. Li, X. and N. Ye, Grid- and dummy-cluster-based learning of normal and intrusive clusters for computer intrusion detection. Quality and Reliability Engineering International, 2002. 18(3): p. 231-242.
97. Li, X. and N. Ye, A supervised clustering and classification algorithm for mining data with mixed variables. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 2006. 36(2): p. 396-406.
98. Maglogiannis, I., et al., Radial basis function neural networks classification for the recognition of idiopathic pulmonary fibrosis in microscopic images. IEEE Transactions on Information Technology in Biomedicine, 2008. 12(1): p. 42-54.
99. Setnes, M. and R. Babuska, Fuzzy relational classifier trained by fuzzy clustering. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 1999. 29(5): p. 619-625.
100. Yang, Z.R., A novel radial basis function neural network for discriminant analysis. IEEE Transactions on Neural Networks, 2006. 17(3): p. 604-612.
101. Ye, N. and X. Li, A scalable, incremental learning algorithm for classification problems. Computers & Industrial Engineering, 2002. 43(4): p. 677-692.
102. Haykin, S., Neural Networks: A Comprehensive Foundation. 1998: Prentice Hall.
103. Xue, H., S. Chen, and Q. Yang, Discriminatively regularized least-squares classification. Pattern Recognition, 2009. 42: p. 93-104.
104. Kim, S.W. and B.J. Oommen, Enhancing prototype reduction schemes with LVQ3-type algorithms. Pattern Recognition, 2003. 36(5): p. 1083-1093.
105. Qiao, L., L. Zhang, and S. Chen, An empirical study of two typical locality preserving linear discriminant analysis methods. Neurocomputing, 2010. 73(10): p. 1587-1594.
106. Lin, C.J., Projected gradient methods for nonnegative matrix factorization. Neural Computation, 2007. 19(10): p. 2756-2779.
107. Bishop, C.M., Pattern Recognition and Machine Learning. Vol. 4. 2006: Springer, New York.
108. Liu, J. and J. Ye, Efficient Euclidean projections in linear time, in Proceedings of the 26th Annual International Conference on Machine Learning (ICML 2009). 2009.
109. Schölkopf, B., et al., Input space versus feature space in kernel-based methods. IEEE Transactions on Neural Networks, 1999. 10(5): p. 1000-1017.
110. Xiong, H., M. Swamy, and M.O. Ahmad, Optimizing the kernel in the empirical feature space. IEEE Transactions on Neural Networks, 2005. 16(2): p. 460-474.
111. Li, Y.F., J.T. Kwok, and Z.H. Zhou. Semi-supervised learning using label mean. in Proceedings of the 26th International Conference on Machine Learning (ICML 2009). 2009: ACM.
112. Mallapragada, P.K., et al., SemiBoost: Boosting for semi-supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009. 31(11): p. 2000-2014.
