Research on Knowledge Discovery Methods and Applications Based on Axiomatic Fuzzy Sets and Support Vector Machines
Abstract
Axiomatic Fuzzy Set (AFS) theory is a new semantic method for handling fuzzy information. In essence, it studies how the regularities and patterns latent in training samples, raw data, or databases can be converted into fuzzy sets and their logic operations; it has already been applied to formal concept analysis, cluster analysis, fuzzy classifiers, and knowledge representation. The Support Vector Machine (SVM) is a supervised pattern-recognition method built on statistical learning theory. SVMs handle small-sample, high-dimensional, and nonlinear problems well, and offer high fitting accuracy, few parameters to select, strong generalization ability, and globally optimal solutions; they have become a research focus in machine learning. This thesis applies AFS and SVM theory to central problems in knowledge discovery and representation. The main contributions are:
     1. Within the AFS framework, the thesis first proposes unsupervised fuzzy feature selection and principal concept selection algorithms, which pick out the features and simple concepts important for knowledge discovery. It then proposes a concept categorization algorithm that groups highly correlated simple concepts into one category, an important problem in artificial intelligence; in practice it reduces the dimensionality of a data set and thus helps avoid the curse of dimensionality. Finally, it proposes a sample characteristic description algorithm that extracts a sample's most salient features; such descriptions are very simple and, in recognition problems, more practical and effective than complex fuzzy descriptions.
     2. A detailed study of the AFS fuzzy-logic clustering algorithm (X. D. Liu, W. Wang and T. Y. Chai, IEEE Transactions on Systems, Man, and Cybernetics, 2005) and of its behavior on real data revealed several drawbacks. To address them, this thesis adds to the original algorithm a procedure that controls how coarse the fuzzy descriptions of samples are, a step that further refines the clustering result, and an improved version of the original AFS cluster validity index. Tests on the public Iris data set show the effectiveness of the new method.
     3. Clustering is a central topic in knowledge discovery. To evaluate the four techniques above (feature selection, principal concept selection, concept categorization, and sample characteristic description) within the AFS framework, the thesis builds a new AFS fuzzy clustering algorithm on top of them. The new way of obtaining each cluster's fuzzy description is very simple: the description is just a conjunction of simple concepts. Such descriptions are compact and highly interpretable. At the same time, each sample's membership in its own cluster is large while its memberships in the other clusters are very small, often close to 0, so the boundaries between clusters are as sharp as possible. On several UCI data sets the clustering accuracy is comparable to, and sometimes better than, that of traditional algorithms such as FCM and k-means. The experiments further show that, for parameters chosen within a reasonable range, the clustering result is very stable; that is, the algorithm is insensitive to parameter selection.
     4. A novel density-based clustering algorithm, DBCAMM, is proposed using the Mahalanobis distance. Its innovations are twofold: first, it replaces the Euclidean distance commonly used in the classical density-based algorithm DBSCAN with the Mahalanobis distance; second, it gives an effective method for merging leaders and followers. In addition, DBCAMM merges sub-clusters using local sub-cluster density information, which overcomes DBSCAN's reliance on a single global density parameter. Experiments on synthetic data sets show the algorithm's effectiveness, and segmentation results of DBCAMM and DBSCAN on several typical images show that DBCAMM produces better visual results.
     5. A classification algorithm with extremely concise fuzzy rules, PFRAS, is proposed. It first uses an SVM to remove outliers from the training set and then, based on AFS theory, finds fuzzy sets with clear linguistic interpretations to describe each class. The algorithm has two further advantages: each rule it produces is just a conjunction of simple concepts, so the rules are simpler, and no parameter tuning is needed to optimize the rules. Compared with other methods, PFRAS yields fewer rules per class (for most data sets, only one rule per class), so the thesis provides a more compact, understandable, and accurate classification model.
Axiomatic Fuzzy Set (AFS) theory is a new method for dealing with fuzzy information; it provides an effective tool for converting the information in training examples and databases into membership functions and their fuzzy logic operations. The Support Vector Machine (SVM) is a supervised pattern-recognition method based on statistical learning theory. It offers high classification accuracy, few parameters, globally optimal solutions, and strong generalization performance, and it has become an active research area in machine learning. This thesis focuses on problems often encountered in knowledge discovery and representation, approached through AFS and SVM theory. The main topics include:
     1. In the framework of AFS theory, this thesis first proposes a fuzzy feature selection algorithm and a principal concept selection algorithm for unsupervised learning, which extract the features and simple concepts important for knowledge discovery. Second, it presents a concept categorization approach, a new and important technique in artificial intelligence, which groups highly correlated simple concepts into one category. Finally, it gives an algorithm for finding a sample's characteristic description, which extracts the sample's salient characteristics; such a description is very simple, and in pattern recognition it is more effective and practical than a complex description.
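The concept categorization step can be illustrated with a small sketch. The code below is not the AFS algorithm itself, only a minimal stand-in: feature columns play the role of simple concepts, and columns whose absolute Pearson correlation exceeds a threshold are greedily grouped into one category. The function name and threshold value are illustrative assumptions.

```python
import numpy as np

def categorize_by_correlation(X, threshold=0.9):
    """Greedily group feature columns whose absolute Pearson correlation
    exceeds `threshold` -- a toy stand-in for AFS concept categorization."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    n = X.shape[1]
    groups, assigned = [], set()
    for i in range(n):
        if i in assigned:
            continue
        group = [i] + [j for j in range(i + 1, n)
                       if j not in assigned and corr[i, j] > threshold]
        assigned.update(group)
        groups.append(group)
    return groups

# two perfectly correlated columns plus one nearly independent column
X = np.array([[1., 2., 5.],
              [2., 4., 3.],
              [3., 6., 8.],
              [4., 8., 1.]])
print(categorize_by_correlation(X))  # columns 0 and 1 fall into one group
```

Grouping correlated concepts this way is what permits the dimensionality reduction mentioned above: one representative per group can replace the whole group.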
     2. An exhaustive study of the clustering algorithm proposed by X. Liu et al. (IEEE Transactions on Systems, Man, and Cybernetics, 2005) and of its effectiveness on real data sets uncovered several drawbacks. To address them, this thesis first proposes an algorithm to control how rough the fuzzy descriptions of objects are; second, it adds a refinement step in which the clusters are further refined using each cluster's fuzzy description; finally, it improves the original AFS cluster validity index. The well-known real-world Iris data set is used to illustrate the effectiveness of the new clustering algorithm.
     3. To evaluate the effectiveness of the feature selection, principal concept selection, concept categorization, and characteristic description algorithms proposed in the AFS framework, a new fuzzy clustering algorithm is built from these techniques. The fuzzy description of each cluster obtained by the proposed algorithm is simply a conjunction of a few simple concepts. A sample's membership in its own cluster is large, while its memberships in the other clusters are as small as possible, often close to 0; with such descriptions the boundaries between clusters become clearer. The descriptions are very simple and easy for users to understand, since each class corresponds to far fewer rules. On several benchmark data sets, the clustering accuracy is comparable to, or better than, that of conventional algorithms such as FCM and k-means. Practical experience further indicates that the algorithm is not sensitive to its parameters, provided they are chosen within a reasonable range.
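The key property above, that a cluster's description is a conjunction of simple concepts, large membership in the right cluster and near-zero membership elsewhere, can be sketched in a few lines. The triangular membership functions and the use of min as the conjunction are illustrative assumptions, not the thesis's actual AFS logic operations.

```python
def triangular_membership(x, center, width):
    """Simple triangular membership function for one linguistic term."""
    return max(0.0, 1.0 - abs(x - center) / width)

def cluster_membership(sample, description):
    """Membership of `sample` in a cluster described as a conjunction
    (here: minimum) of simple concepts, one per selected feature.
    `description` maps feature index -> (center, width)."""
    return min(triangular_membership(sample[i], c, w)
               for i, (c, w) in description.items())

# hypothetical cluster: "feature 0 around 1.0" AND "feature 1 around 5.0"
desc = {0: (1.0, 2.0), 1: (5.0, 2.0)}
print(cluster_membership([1.0, 5.0], desc))   # 1.0: satisfies both concepts
print(cluster_membership([3.0, 5.0], desc))   # 0.0: violates the first concept
```

Because the conjunction takes the weakest concept, a sample that violates even one concept of another cluster's description gets a membership near 0 there, which is what sharpens the cluster boundaries.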
     4. A new density-based clustering algorithm using the Mahalanobis metric is proposed. It has two novelties: it adopts the Mahalanobis metric as the distance measure, and it provides an effective approach for merging leaders and followers. To overcome the single global density parameter of DBSCAN, an approach is proposed that merges sub-clusters using local sub-cluster density information. Extensive experiments on synthetic data sets show the validity of the proposed algorithm. Segmentation results on typical images further show that it produces much better visual results than DBSCAN.
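The distance substitution at the heart of this algorithm can be illustrated directly. The sketch below only shows why the Mahalanobis metric adapts neighborhoods to the data's covariance, stretching them along high-variance directions; it is not the clustering algorithm itself.

```python
import numpy as np

def mahalanobis(x, y, cov_inv):
    """Mahalanobis distance between x and y given an inverse covariance."""
    d = np.asarray(x) - np.asarray(y)
    return float(np.sqrt(d @ cov_inv @ d))

# elongated 2-D cloud: variance 4 along axis 0, variance 1 along axis 1
cov = np.array([[4.0, 0.0], [0.0, 1.0]])
cov_inv = np.linalg.inv(cov)

# both points lie at Euclidean distance 2.0 from the origin, but the one
# along the high-variance axis is "closer" in the Mahalanobis sense
print(mahalanobis([2.0, 0.0], [0.0, 0.0], cov_inv))  # 1.0
print(mahalanobis([0.0, 2.0], [0.0, 0.0], cov_inv))  # 2.0
```

A density neighborhood defined with this metric is an ellipsoid aligned with the data, which is why it can find clusters that a fixed Euclidean radius in DBSCAN would break apart.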
     5. A classification method based on easily interpretable fuzzy rules is proposed. It capitalizes on two key techniques: pruning the outliers in the training data with SVMs, and finding a fuzzy set with a definite linguistic interpretation to describe each class based on AFS theory. Compared with other fuzzy rule-based methods, the resulting models are usually more compact and easier for users to understand, since each class is described by far fewer rules. The method has two further advantages: each rule obtained from the proposed algorithm is simply a conjunction of linguistic terms, and no parameters need to be tuned. The results show that the fuzzy rule-based classifier presented here offers a compact, understandable, and accurate classification scheme, achieving a balance between interpretability and accuracy.
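The outlier-pruning step can be sketched under a simplifying assumption: instead of an SVM, a nearest-centroid classifier stands in as the pruning model, and training points whose labels it cannot reproduce are removed before the rules are learned. Everything here (function name, data) is illustrative, not the thesis's PFRAS implementation.

```python
import numpy as np

def prune_outliers(X, y):
    """Remove training samples whose label disagrees with a
    nearest-centroid prediction -- a simple stand-in for the
    SVM-based pruning step described above."""
    labels = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in labels])
    # distance of every sample to every class centroid
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    predicted = labels[np.argmin(dists, axis=1)]
    keep = predicted == y
    return X[keep], y[keep]

# two tight clusters plus one point mislabelled into the wrong cluster
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
y = np.array([0, 0, 1, 1, 0])          # last point is an outlier of class 0
X2, y2 = prune_outliers(X, y)
print(len(X2))  # 4: the mislabelled point has been removed
```

Removing such points first means each class can then be covered by a single simple conjunction of linguistic terms, which is what keeps the rule base so small.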
