Word Embedding Composition for Data Imbalances in Sentiment and Emotion Classification
  • Authors: Ruifeng Xu (1)
    Tao Chen (1)
    Yunqing Xia (2)
    Qin Lu (3)
    Bin Liu (1)
    Xuan Wang (1)

    1. Shenzhen Engineering Laboratory of Digital Stage Performance Robot, Harbin Institute of Technology Shenzhen Graduate School, Shenzhen, Guangdong, China
    2. Research Institute of Information Technology, Tsinghua University, Beijing, China
    3. Department of Computing, The Hong Kong Polytechnic University, Kowloon, Hong Kong
  • Keywords: Sentiment analysis; Emotion classification; Imbalanced training; Word embedding; Semantic compositionality
  • Journal: Cognitive Computation
  • Publication date: April 2015
  • Year: 2015
  • Volume: 7
  • Issue: 2
  • Pages: 226-240
  • Full text size: 871 KB
  • Subjects: Neurosciences; Computation by Abstract Devices; Artificial Intelligence (incl. Robotics); Computational Biology/Bioinformatics
  • Publisher: Springer US
  • ISSN:1866-9964
Abstract
Text classification often faces the problem of imbalanced training data. This is true in sentiment analysis and is particularly prominent in emotion classification, where multiple emotion categories are very likely to produce naturally skewed training data. Various sampling methods have been proposed to improve classification performance by reducing the imbalance ratio between training classes. However, data sparseness and the small disjunct problem remain obstacles to generating new samples for minority classes when the data are skewed and limited. Methods that produce meaningful samples for the smaller classes, rather than simply duplicating existing ones, are essential to overcoming this problem. In this paper, we present an oversampling method based on word embedding compositionality that produces meaningful, balanced training data. We first train a continuous skip-gram model on a large corpus to obtain a word embedding model that preserves the syntactic and semantic integrity of the word features. Then, a compositional algorithm based on recursive neural tensor networks constructs sentence vectors from the word embedding model. Finally, we apply the SMOTE algorithm as an oversampling method to generate samples for the minority classes, producing a fully balanced training set. Evaluation results on two quite different tasks show that both the feature composition method and the oversampling method contribute to the improved classification results. Our method effectively addresses the data imbalance issue and consequently achieves improved results for both sentiment and emotion classification.
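
To make the pipeline concrete, below is a minimal Python sketch of the three stages the abstract describes: skip-gram embedding training, sentence-vector composition, and SMOTE oversampling. It assumes the gensim (>= 4.0) and imbalanced-learn libraries; the toy corpus and labels are hypothetical, and plain word-vector averaging stands in for the paper's recursive-neural-tensor-network composition, so this illustrates the data flow rather than the authors' implementation.

```python
import numpy as np
from gensim.models import Word2Vec          # skip-gram word embeddings
from imblearn.over_sampling import SMOTE    # synthetic minority oversampling

# Hypothetical tokenized sentences with an imbalanced label distribution
# (class 1, "negative", is the minority here).
sentences = [
    "the movie was wonderful and moving".split(),
    "a joyful and delightful film".split(),
    "what a happy surprise this was".split(),
    "genuinely warm and uplifting story".split(),
    "charming cast and a lovely script".split(),
    "the plot was dull and boring".split(),
    "i hated every tedious minute".split(),
    "a disappointing and forgettable film".split(),
]
y = np.array([0, 0, 0, 0, 0, 1, 1, 1])

# Stage 1: train a continuous skip-gram model (sg=1). The paper trains
# on a large external corpus; the toy corpus here only shows the API.
emb = Word2Vec(sentences, vector_size=50, sg=1, window=5,
               min_count=1, seed=1, epochs=50)

# Stage 2: compose a fixed-length sentence vector from word embeddings.
# Averaging is a simplification standing in for the recursive neural
# tensor network composition used in the paper.
def sentence_vector(tokens, model):
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

X = np.vstack([sentence_vector(s, emb) for s in sentences])

# Stage 3: SMOTE oversamples the minority class in sentence-vector space
# by interpolating between a minority sample and one of its k nearest
# minority neighbours; k_neighbors must be smaller than the minority
# class size (3 samples here, hence k_neighbors=2).
X_bal, y_bal = SMOTE(k_neighbors=2, random_state=0).fit_resample(X, y)
print(np.bincount(y_bal))  # -> [5 5]: a fully balanced training set
```

Generating synthetic samples in the composed sentence-vector space, rather than duplicating raw documents, is what lets the oversampling step produce new, meaningful minority-class points instead of exact copies.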
