Improved Word Representation Based on GloVe Model
  • Authors: CHEN Zhen-Rui; DING Zhi-Ming
  • Affiliation: Faculty of Information Technology, Beijing University of Technology
  • Keywords: word vector; Word2Vec; GloVe; co-occurrence matrix; unrelated words
  • Journal: Computer Systems & Applications (计算机系统应用; CNKI journal code XTYY)
  • Publication date: 2019-01-15
  • Year: 2019
  • Volume: v.28
  • Issue: 01
  • Pages: 196-201 (6 pages)
  • CN: 11-2854/TP
  • Article ID: XTYY201901029
  • Language: Chinese
  • Funding: National Key Research and Development Program of China (2017YFC0803300); Beijing Municipal Education Commission projects (KM201810005023, KM201810005024, KZ201610005009); National Natural Science Foundation of China (61402449, 61703013, 91546111, 91646201); Beijing Municipal Science and Technology Program (Z161100001116072)
Abstract
Word vector representations capture the syntactic and semantic information of words well. To improve the accuracy with which word vectors represent semantic information, this study analyzes the characteristics of the co-occurrence matrix of the GloVe model and, drawing on the distributional hypothesis, proposes an improved method based on the GloVe training model. By analyzing word-frequency statistics over Wikipedia, the method derives general rules for filtering unrelated words and noise words out of the co-occurrence matrix. Finally, the resulting word vectors are evaluated on a word-analogy dataset and a word-similarity dataset. Experiments show that, in the same experimental environment, the proposed method effectively shortens word vector training time and improves accuracy on the semantic word-analogy task.
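For context, the objective that the GloVe model minimizes over the co-occurrence matrix X (as defined in the original GloVe paper by Pennington et al., 2014) is the weighted least-squares loss

    J = \sum_{i,j=1}^{V} f(X_{ij}) \left( w_i^{\top} \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2

where X_{ij} counts how often word j appears in the context window of word i, w and \tilde{w} are the word and context vectors, b and \tilde{b} are biases, and f is a weighting function that caps the influence of very frequent pairs. Removing unrelated and noise words shrinks X before this loss is optimized, which is why filtering can shorten training time.

The sketch below illustrates the general idea of frequency-based filtering while building a GloVe-style co-occurrence matrix. It is a minimal Python illustration under assumed conventions, not the paper's actual rule: the function name build_filtered_cooccurrence, the relative-frequency cutoff max_freq_ratio, and the toy corpus are all hypothetical, whereas the paper derives its filtering rules from word-frequency analysis of Wikipedia.

    from collections import Counter, defaultdict

    def build_filtered_cooccurrence(corpus, window=5, max_freq_ratio=0.001):
        # Count unigram frequencies over the tokenized corpus.
        freq = Counter(tok for sent in corpus for tok in sent)
        total = sum(freq.values())
        # Hypothetical noise criterion: treat extremely frequent tokens
        # (function words, etc.) as unrelated/noise words and drop them.
        noise = {w for w, c in freq.items() if c / total > max_freq_ratio}

        cooc = defaultdict(float)
        for sent in corpus:
            kept = [t for t in sent if t not in noise]
            for i, center in enumerate(kept):
                for j in range(max(0, i - window), i):
                    # As in GloVe, weight a pair by the inverse of the
                    # distance between the two words; store symmetrically.
                    cooc[(center, kept[j])] += 1.0 / (i - j)
                    cooc[(kept[j], center)] += 1.0 / (i - j)
        return cooc

    # Toy usage: with a cutoff of 0.3, "the" (4 of 10 tokens) is filtered out.
    corpus = [["the", "king", "rules", "the", "land"],
              ["the", "queen", "rules", "the", "castle"]]
    print(build_filtered_cooccurrence(corpus, window=2, max_freq_ratio=0.3))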
