手语计算30年:回顾与展望
  • 英文篇名:Thirty Years Beyond Sign Language Computing:Retrospect and Prospect
  • 作者:姚登峰 ; 江铭虎 ; 鲍泓 ; 李晗静 ; 阿布都克力木·阿布力孜
  • 英文作者:YAO Deng-Feng;JIANG Ming-Hu;BAO Hong;LI Han-Jing;ABUDOUKELIMU Abulizi;Beijing Key Laboratory of Information Service Engineering,Beijing Union University;Laboratory of Computational Linguistics,School of Humanities,Center for Psychology and Cognitive Science,Tsinghua University;
  • 关键词:手语计算 ; 分类词谓语 ; 机器翻译 ; 空间建模 ; 多信道 ; 空间隐喻
  • 英文关键词:sign language computing; classifier predicates; machine translation; spatial modeling; multi-channel; spatial metaphor
  • 中文刊名:JSJX
  • 英文刊名:Chinese Journal of Computers
  • 机构:北京市信息服务工程重点实验室北京联合大学;清华大学人文学院计算语言学实验室心理学与认知科学研究中心;
  • 出版日期:2019-01-15
  • 出版单位:计算机学报
  • 年:2019
  • 期:v.42;No.433
  • 基金:Supported by the Key Program of the National Natural Science Foundation of China (61433015); the Major Program of the National Social Science Foundation of China (14ZDB154); the Humanities and Social Sciences Youth Foundation of the Ministry of Education (14YJC740104); the Key Program of the State Language Commission (ZDI135-31); the Beijing Municipal Universities High-Level Teacher Team Building Plan for Innovative Teams (IDHT20170511); the Beijing Municipal Education Commission Science and Technology Program (KM201711417006); the Tsinghua University Initiative Scientific Research Program, Cross-Strait Tsinghua Special Project (20161080056); and the Beijing Union University Talent Program
  • 语种:中文;
  • 页:113-137
  • 页数:25
  • CN:11-1826/TP
  • ISSN:0254-4164
摘要
手语的自然语言处理是计算机学科中的一项重要任务.目前随着信息技术的飞速发展,以文本和语音为主要载体的传统语言计算的工作重点已从编码、输入方法和字音的研究逐渐转移到语法层面,并进入深度计算的阶段.然而手语信息处理却严重滞后,处于空白起步阶段.究其原因,主要是缺乏用于机器学习的具有一定规模的手语语料库资源,同时传统的语言计算技术也存在不足,这些都阻碍了手语机器翻译、手语问答系统、手语信息检索等信息处理的应用研究.该文首先阐述了手语计算与传统语言计算的本质差异在于空间建模,这种差异导致了前者核心任务是单信道与多信道转换,后者根本任务是消歧.从词法、句法、语义、语用、应用等层面对手语计算进行了回顾,重点介绍了手语机器翻译和分类词谓语计算,指出分类词谓语是手语计算的关键以及取得突破的切入点.从展望的角度,认为互联网时代体感设备的出现、认知神经科学的兴起、深度学习的进展等新技术为手语计算带来了新的机遇.将手语计算与传统语言计算进行比较,分析了手语计算的趋势和未来的研究方向,手语的认知计算是从手势的物理特征到语义表征的映射转换过程,其计算趋势是填补音韵特征、语义单元这样的中间步骤,避免直接从底层特征得到语义概念,关注在手语行为与语言特征的关系上进行机器学习,建立融合空间特征的统计学习模型.未来研究方向包括资源建设、文景转换、隐喻理解,其中文景转换有助于实现空间信息抽取,即物体的空间方向、位置等信息,结合知识库消除自然语言的模糊性,进而实现三维场景构建.指出手语计算正从萌芽期过渡到发展期,若取得重大突破,手语计算将扩展语言计算体系,推动人工智能的发展.
        The natural language processing of sign language is an important task in artificial intelligence and information processing. With the rapid development of information technology, the focus of processing spoken and written language has gradually shifted from character coding and input methods to the grammatical level, and on to deep computing. Sign language information processing, however, lags seriously behind and remains at the starting stage. The main reason is that no ready-made sign language corpus resources of sufficient scale are available for machine learning and deep learning, and applications such as sign language machine translation, sign language question answering, and sign language information retrieval cannot proceed for lack of this research foundation. The essential difference between sign language computing and traditional language computing is spatial modeling; consequently, the core task of sign language computing is conversion between single-channel and multi-channel representations, whereas the fundamental task of traditional language computing is the disambiguation of single-channel representations. Sign language computing is reviewed at the lexical, syntactic, semantic, pragmatic, and application levels, with emphasis on sign language machine translation and the computation of classifier predicates. Classifier predicates are the key to sign language computing and its most promising point of breakthrough. New technologies, such as the emergence of somatosensory devices in the Internet age, the rise of cognitive neuroscience, and the progress of deep learning, have brought new opportunities to sign language computing. Looking forward, sign language computing is compared with spoken language computing, and its trends and future research directions are analyzed. The cognitive computing of sign language is a mapping from the physical characteristics of gestures to semantic representations. The trend is to fill in intermediate steps such as phonological features and semantic units, avoiding the derivation of semantic concepts directly from low-level physical features; to focus machine learning on the relationship between sign language behavior and linguistic features; and to establish statistical learning models that integrate spatial features. Future directions include resource construction, text-to-scene conversion, and metaphor understanding. Among these, text-to-scene conversion helps realize spatial information extraction, i.e., the orientation and position of objects; combined with a knowledge base, it can eliminate the ambiguity of natural language, so that three-dimensional scene construction becomes achievable, creating a breakthrough in understanding spatial relationships and generating virtual scenes. Sign language computing is moving from its embryonic period into a development period, and driven by interdisciplinary research it may achieve a substantial breakthrough. The astonishing progress of traditional language computing has pushed artificial intelligence and human-computer interaction forward; if the theoretical, technical, and engineering problems of sign language computing can be solved, it will greatly accelerate the development of artificial intelligence and natural language processing.
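The staged mapping the abstract describes — from physical gesture features through phonological features and semantic units to a semantic concept, rather than jumping directly from low-level features to meaning — can be sketched as a small pipeline. All feature names, thresholds, and rules below are illustrative assumptions for exposition, not the paper's model:

```python
# Hypothetical sketch of the abstract's staged mapping:
# physical features -> phonological features -> semantic units -> concept.

def extract_phonology(raw):
    """Map low-level physical measurements to phonological features
    (handshape, location, movement) as an intermediate step."""
    return {
        "handshape": "flat" if raw["finger_spread"] > 0.5 else "fist",
        "location": raw["hand_position"],
        "movement": "arc" if raw["trajectory_curvature"] > 0.3 else "straight",
    }

def to_semantic_units(phon):
    """Group phonological features into intermediate semantic units."""
    return [("CLASSIFIER", phon["handshape"]),
            ("PATH", phon["movement"]),
            ("PLACE", phon["location"])]

def to_concept(units):
    """Combine semantic units into a toy semantic representation."""
    return {role: value for role, value in units}

raw_gesture = {"finger_spread": 0.8, "hand_position": "chest",
               "trajectory_curvature": 0.1}
concept = to_concept(to_semantic_units(extract_phonology(raw_gesture)))
print(concept)  # {'CLASSIFIER': 'flat', 'PATH': 'straight', 'PLACE': 'chest'}
```

The point of the intermediate layers is that each stage can be learned or corrected independently, which is what the abstract argues for over end-to-end feature-to-concept shortcuts.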
    (1) In practice, collecting, transcribing, and annotating sign language video is tedious and difficult. Many researchers have pointed out that, among all kinds of corpus annotation, only sign language video annotation has a Real-Time Factor (RTF) of 100, meaning that one hour of sign language video requires 100 hours of annotation work. Annotators therefore cannot spend the time needed to mark complete linguistic detail, such as sentence type or dominant/non-dominant hand. The most common annotation is a direct textual translation of the sign language video, and under such time constraints annotators are also unlikely to add gesture boundaries or compound-gesture markers to these translations. Hence, whether gestures are recognized into text automatically or translated into text manually, gesture segmentation is an unavoidable problem.
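The annotation cost this note quantifies follows directly from the Real-Time Factor: annotation time is media duration multiplied by the RTF. A minimal sketch, using the note's own figure of RTF = 100 for sign language video:

```python
def annotation_hours(media_hours, rtf):
    """Real-Time Factor: annotation time = media duration * RTF."""
    return media_hours * rtf

# The note's example: 1 h of sign language video at RTF = 100
# requires 100 h of annotation work.
print(annotation_hours(1, 100))    # 100
print(annotation_hours(2.5, 100))  # 250.0
```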
