Fault Recognition Methods Based on Neighborhood Structure Analysis of the Feature Space
Abstract
Fault diagnosis plays a crucial role in improving the reliability of equipment operation. A fault diagnosis system must determine the location, type, nature, and severity of a fault quickly and effectively, and must offer real-time operation, online updating, and accuracy. Knowledge-discovery-based approaches are among the most effective fault diagnosis methods, yet current fault recognition modules still fall short of application requirements in stability and accuracy. Starting from the exploration and exploitation of fault feature subspaces, this thesis granulates the data into neighborhoods, searches for multiple attribute reducts that preserve the approximation ability of the original feature space, and then fuses the information of different reducts through optimization, so as to improve the generalization performance and stability of fault recognition. The work explores the following aspects:
     First, an attribute reduction method based on the neighborhood discernibility matrix and a fast reduction method based on sample pair selection are proposed. The discernibility-matrix-based reduction approach is introduced into neighborhood rough sets, establishing attribute reduction under the neighborhood discernibility relation. In discernibility-matrix-based reduction, only the minimal elements of the matrix affect the result, so a fast method is given that obtains reducts by searching for these minimal elements. The influence of the neighborhood size on attribute reduction is also analyzed, and the effectiveness of the reduct evaluation indexes is verified.
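As an illustration of this idea, the following Python sketch builds the discernibility elements of a numeric decision table, keeps only the minimal ones, and extracts a reduct with a greedy hitting-set heuristic. The function names and the radius `delta` are ours, not the thesis's notation, and the greedy step is only a stand-in for the exact derivation of reducts from the minimal elements.

```python
import numpy as np

def neighborhood_discernibility_elements(X, y, delta=0.2):
    """For every pair of samples with different labels, collect the attributes
    on which the two samples differ by more than the neighborhood radius delta,
    i.e. the attributes that can discern that pair."""
    n = X.shape[0]
    elements = []
    for i in range(n):
        for j in range(i + 1, n):
            if y[i] == y[j]:
                continue  # only pairs from different classes need to be discerned
            attrs = frozenset(int(a) for a in np.where(np.abs(X[i] - X[j]) > delta)[0])
            if attrs:
                elements.append(attrs)
    return elements

def minimal_elements(elements):
    """Only the minimal elements (w.r.t. set inclusion) constrain the reducts."""
    return {e for e in elements if not any(other < e for other in elements)}

def greedy_reduct(min_elems, n_attrs):
    """Greedy hitting set over the minimal elements: repeatedly pick the
    attribute that covers the most still-uncovered elements."""
    uncovered, reduct = set(min_elems), []
    while uncovered:
        best = max(range(n_attrs), key=lambda a: sum(a in e for e in uncovered))
        reduct.append(best)
        uncovered = {e for e in uncovered if best not in e}
    return sorted(reduct)
```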
     Second, a method for finding all reducts based on neighborhood attribute dependency and a neighborhood randomized reduction method are proposed. Neighborhood attribute dependency has previously been used to construct algorithms that search for a single reduct; by enumerating the power set of attributes, the dependency measure can also be used to find all reducts. To obtain a fast and efficient way of computing multiple reducts, a randomized reduction algorithm based on attribute dependency is constructed. Both attribute reduction methods apply not only to neighborhood rough sets but can also be extended to classical rough sets and fuzzy rough sets.
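A minimal sketch of the dependency measure and of one randomized-reduction pass is given below, assuming a Chebyshev neighborhood of radius `delta` and a random deletion order; this is our reading of the randomized algorithm rather than its exact specification.

```python
import numpy as np

def neighborhood_dependency(X, y, attrs, delta=0.2):
    """Dependency of the decision on an attribute subset: the fraction of
    samples whose delta-neighborhood in that subspace is pure in class."""
    attrs = list(attrs)
    if not attrs:
        return 0.0
    Xs = X[:, attrs]
    consistent = 0
    for i in range(len(X)):
        dist = np.max(np.abs(Xs - Xs[i]), axis=1)  # Chebyshev distance
        neighbors = dist <= delta
        if np.all(y[neighbors] == y[i]):
            consistent += 1
    return consistent / len(X)

def random_reduct(X, y, delta=0.2, rng=None):
    """One randomized-reduction pass: drop attributes in random order whenever
    the dependency of the remaining subset is preserved."""
    rng = rng or np.random.default_rng()
    full = list(range(X.shape[1]))
    target = neighborhood_dependency(X, y, full, delta)
    reduct = set(full)
    for a in rng.permutation(full):
        trial = reduct - {a}
        if trial and neighborhood_dependency(X, y, trial, delta) >= target:
            reduct = trial
    return sorted(reduct)

# Different random orderings generally yield different reducts,
# giving a pool of fault-separable subspaces for the later ensemble.
```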
     Third, an ensemble learning method based on margin distribution entropy is proposed. The concept of margin distribution entropy is introduced to characterize the uniformity of the margin distribution, and the margin and the margin distribution entropy are maximized simultaneously. A corresponding ensemble learning method is designed, and its classification performance and the resulting change in margin distribution are examined.
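The sketch below is one plausible reading of margin distribution entropy, namely the Shannon entropy of a histogram of ensemble margins; the exact discretization and the optimization scheme used in the thesis may differ, and all names here are illustrative.

```python
import numpy as np

def ensemble_margins(H, y, w):
    """Margins of a weighted voting ensemble.
    H: (n_samples, n_classifiers) base predictions in {-1, +1}
    y: (n_samples,) true labels in {-1, +1}
    w: (n_classifiers,) non-negative weights summing to 1"""
    return y * (H @ w)

def margin_distribution_entropy(margins, n_bins=20):
    """Shannon entropy of the margin histogram: higher entropy means a more
    uniform (less peaked) margin distribution."""
    hist, _ = np.histogram(margins, bins=n_bins, range=(-1.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def objective(w, H, y, lam=0.1):
    """Illustrative trade-off to be maximized over the weight simplex:
    average margin plus a weighted margin-distribution-entropy term."""
    m = ensemble_margins(H, y, w)
    return m.mean() + lam * margin_distribution_entropy(m)
```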
     Fourth, an ensemble learning method based on margin distribution optimization with regularization is proposed. The fusion of multiple classifier decisions is essentially a special classification problem, and the main idea of this thesis is to solve the weight learning problem of classifier ensembles from the perspective of classifier design. By combining fusion loss minimization with regularized learning, regularized ensemble learning methods based on the square loss, the logistic loss, and the linear loss are proposed, and a generalization bound is given for the square-loss case. The classification performance, the change in margin distribution, and the weights learned under different optimization objectives are tested to verify the effectiveness of the method.
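For the square-loss case with an L2 regularizer, weight learning reduces to a ridge-regression-like problem with a closed-form solution, as in the sketch below (hypothetical function names, binary labels in {-1, +1}). The logistic- and linear-loss variants and the other regularizers and constraints studied in the thesis are not reproduced here.

```python
import numpy as np

def fuse_weights_square_loss(H, y, lam=1.0):
    """Learn fusion weights by regularized square-loss minimization.
    With M[i, t] = y[i] * H[i, t] (margin of base classifier t on sample i),
    solve  min_w ||1 - M w||^2 + lam * ||w||^2."""
    M = y[:, None] * H
    n_clf = M.shape[1]
    A = M.T @ M + lam * np.eye(n_clf)
    b = M.T @ np.ones(M.shape[0])
    return np.linalg.solve(A, b)

def fuse_predict(H, w):
    """Weighted-vote prediction of the fused ensemble."""
    return np.sign(H @ w)
```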
     Fifth, a fault recognition method based on the ensemble of neighborhood reducts is proposed. A discriminant function learned from a single fault feature subspace has poor stability and generalization performance; integrating the information of different feature subspaces improves the stability and accuracy of fault recognition. Different fault-separable subspaces are obtained by neighborhood randomized reduction, and the classification functions built on these subspaces are combined through margin distribution optimization. The effectiveness of the proposed method is verified on gear crack fault data.
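Putting the pieces together, an illustrative binary-classification pipeline could look as follows; it reuses the `random_reduct` and `fuse_weights_square_loss` sketches above and uses scikit-learn's `LinearSVC` as a stand-in base learner, none of which are prescribed by the thesis (gear crack level recognition is in general multi-class).

```python
import numpy as np
from sklearn.svm import LinearSVC

def reduct_ensemble(X_train, y_train, X_test, n_reducts=10, delta=0.2, lam=1.0):
    """Sketch of the overall pipeline: random neighborhood reducts define
    fault-separable subspaces, a base classifier is trained on each, and the
    base decisions are fused by regularized square-loss weight learning.
    Labels are assumed to be in {-1, +1}."""
    rng = np.random.default_rng(0)
    reducts, models = [], []
    for _ in range(n_reducts):
        r = random_reduct(X_train, y_train, delta, rng)   # see sketch above
        models.append(LinearSVC().fit(X_train[:, r], y_train))
        reducts.append(r)
    # Base decisions on training and test data
    H_tr = np.column_stack([m.predict(X_train[:, r]) for m, r in zip(models, reducts)])
    H_te = np.column_stack([m.predict(X_test[:, r]) for m, r in zip(models, reducts)])
    w = fuse_weights_square_loss(H_tr, y_train, lam)       # see sketch above
    return fuse_predict(H_te, w)
```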
     This thesis unifies attribute reduction methods based on neighborhood rough sets. Because of the limited generalization performance and stability of the minimal neighborhood-separable subspace, an ensemble learning method over neighborhood reducts based on margin distribution optimization is proposed. The concept of margin distribution entropy is introduced, and the margin distribution is altered by optimizing it. By combining fusion loss minimization with regularized learning, ensemble learning methods based on margin distribution optimization are proposed for three loss functions under different regularizers and constraints.
Fault diagnosis plays a crucial role in improving the reliability of equipment. Users of fault diagnosis expect the fault location, fault class, and trend to be judged effectively and rapidly; meanwhile, real-time operation, online updating, and accuracy are essential. Fault analysis based on knowledge discovery is one of the most effective approaches. In view of the limitations in stability and accuracy of present fault recognition modules in practical application, this thesis starts from the exploration and use of fault feature subspaces: multiple reducts that keep the approximation ability of the original feature space are obtained, and the information in different reducts is then fused through optimization methods to improve the generalization capability and stability of fault recognition. The main contributions of the work are as follows:
     Firstly, attribute reduction based on the neighborhood discernibility matrix and a fast method based on sample pair selection are put forward. Discernibility-matrix-based attribute reduction is introduced into neighborhood rough sets. In this approach, only the minimal elements of the matrix matter for attribute reduction, so reducts can be found by searching for the minimal elements. In addition, the impact of the neighborhood size on attribute reduction is analyzed, and the effectiveness of reduct evaluation indexes is tested.
     Secondly, a method for finding all reducts and a randomized reduction method, both based on neighborhood attribute dependency, are constructed. Neighborhood attribute dependency has already been used to construct algorithms that look for a single reduct; in fact, power sets can be used to find all reducts from the dependency measure. To obtain a fast and effective way of finding multiple reducts, neighborhood randomized reduction is proposed. The two attribute reduction methods are applicable not only to neighborhood rough sets but can also be extended to classical rough sets and fuzzy rough sets.
     Thirdly, an ensemble learning method based on margin distribution entropy is proposed. The concept of margin distribution entropy is put forward to indicate the uniformity of the margin distribution. An ensemble learning method is designed that maximizes the margin and the margin distribution entropy simultaneously, and its classification performance and the resulting change in margin distribution are examined.
     Fourthly, an ensemble learning method based on margin distribution optimization and regularization is proposed. Multiple classifier fusion is in essence a special classification problem, and solving the weight learning problem of classifier ensembles from the angle of classifier design is the main idea of this work. By combining fusion loss minimization and regularized learning, the square loss, logistic loss, and linear loss are used to construct regularization-based ensemble learning methods, and a generalization bound is given for the square loss. The classification performance, the variation of the margin distribution, and the weights learned under different optimization objectives are tested to demonstrate the effectiveness of the proposed methods.
     Finally, because the discriminant function learned from a single fault feature subspace has poor stability and generalization ability, fusing the information in different feature subspaces can greatly enhance the stability and accuracy of fault recognition. Different feature subspaces are obtained through neighborhood randomized reduction and are then integrated by margin distribution optimization. The effectiveness of this method is tested on gear crack level recognition.
     In this thesis, attribute reduction methods based on neighborhood rough sets are unified. Owing to the limited generalization performance and stability of the minimal neighborhood-separable subspace, an ensemble learning method over neighborhood reducts based on margin distribution optimization is constructed. The margin distribution entropy is proposed and optimized to change the margin distribution. By combining fusion loss minimization and regularized learning, ensemble learning methods based on three loss functions with different regularization terms and constraints are put forward.
