A Correlational Study of the Spoken English Test and the Writing Sub-test in the College English Test Band 4 (CET-4)
Abstract
The aims of a language teaching course are commonly defined in terms of four skills: listening, speaking, reading, and writing. Among these, speaking and writing are generally regarded as the productive skills. Since both are productive skills, they can be expected to share many features and to be closely related, a point that has been extensively documented by researchers in various fields. Nevertheless, very few correlational studies of speaking tests and writing tests are available, let alone of the correlation between the CET Spoken English Test (CET-SET) and the CET-4 Writing Sub-test.
     To fill this research gap, an empirical study was conducted to investigate whether there is a significant correlation between Chinese college students' CET-SET scores and their CET-4 Writing Sub-test scores. The research questions addressed in this study are as follows:
     (1) Is there an overall significant correlation between college students' English speaking ability and their English writing ability? If so, to what extent are they correlated?
     (2) How are speaking ability and writing ability correlated for students at different English speaking or writing levels?
     (3) What notable characteristics can be found in the speaking and writing of students whose speaking and writing levels are inconsistent?
     In the experiment, 1,500 college students from different Chinese universities participated in the study. They first took the written CET-4 and all passed it with total scores of no less than 550 out of 710; they then all took the CET-SET. After data collection, the data were formatted and entered into SPSS 11.5, where descriptive statistics and Pearson's correlation analysis were used to address the three research questions.
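The Pearson correlation step of the analysis can be sketched in a few lines of plain Python. This is only an illustrative sketch: the score lists below are hypothetical stand-ins for paired CET-SET and CET-4 writing scores, not the study's data.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Covariance term (unnormalized) and the two standard-deviation terms
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical speaking and writing scores for five candidates
speaking = [13.5, 12.0, 14.5, 11.0, 13.0]
writing = [9.0, 10.5, 8.5, 11.0, 9.5]
print(round(pearson_r(speaking, writing), 3))
```

In the actual study this computation (together with descriptive statistics) was carried out in SPSS 11.5 rather than by hand.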
     The major findings of this study are as follows:
     (1) There is an overall positive correlation between Chinese college students' English speaking ability and their English writing ability, but the correlation coefficient is very low, only 0.051.
     (2) Whether students are grouped by English speaking level or by writing level, no significant intragroup correlation is found between their English speaking ability and their English writing ability.
     (3) Analysis of the speaking and writing samples of students with inconsistent speaking and writing levels shows that students with high speaking ability yet intermediate or even low writing ability produce extended discourse, speak fluently with good pronunciation, and participate actively in the group discussion in the CET-SET, whereas in their writing they rely predominantly on simple words and sentence structures and produce redundant, loosely coherent arguments. Conversely, the CET-SET performance of students with low speaking ability yet intermediate or even high writing ability is marked by disfluent utterances, frequent grammatical errors, and poor pronunciation and intonation, while their writing features clear structure, convincing examples, few grammatical errors, and flexible sentence structures and word choice.
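As a side note on finding (1), whether a sample Pearson coefficient differs significantly from zero is conventionally checked with a t-test on r with n − 2 degrees of freedom. The sketch below plugs in the reported r = 0.051 and sample size n = 1,500; with so large a sample, even this small coefficient sits near the conventional 5% threshold, which is worth keeping in mind when interpreting "very low".

```python
import math

# Values reported in the study
r, n = 0.051, 1500

# t-statistic for H0: rho = 0, with df = n - 2
t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)
print(round(t, 3))  # compare against the two-tailed 5% critical value (about 1.96 for df = 1498)
```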
     Thus, to develop Chinese college students' English in a balanced way, that is, to raise their speaking ability and writing ability in step, students with high speaking ability yet intermediate or even low writing ability should, in their future English study, pay more attention to word choice, write more informative sentences, and use appropriate cohesive devices in their writing; meanwhile, students with low speaking ability yet intermediate or even high writing ability most need more practice in oral English. Only in this way can college students' English speaking ability and writing ability both reach a high level.
