An Analysis of Nonnative Raters' Perceptions and Judgment Criteria for Korean English Learners' Fluency and Pronunciation
Abstract
Song, Min-Young. 2017. Nonnative raters' perceptions and judgments of Korean English learners' fluency and pronunciation level. Korean Journal of English Language and Linguistics 17-4, 787-815. This study investigates how nonnative English raters understand the concepts of 'fluency' and 'pronunciation' and what criteria they apply when evaluating Korean English learners' speaking performance. Most of the participants in this study understood the main concepts of fluency defined in NEAT and referred to them in their oral reports. However, rather than judging absolute levels of fluency by applying those concepts, they determined fluency scores in light of other factors such as task completion, the amount of speech produced, or the level of the test concerned. In particular, they tended to judge fluency levels in relation to task completion levels. As for pronunciation, most of the participants did not consider the main concepts of pronunciation defined in NEAT. Instead, they applied 'intelligibility' as the only crucial criterion in determining pronunciation scores. Some participants, however, applied this criterion too generously to discriminate pronunciation levels appropriately. Many participants also tended to adjust pronunciation scores based on task completion scores. In conclusion, although the participants in this study were relatively reliable and experienced scorers, most of them did not apply appropriate scoring criteria in evaluating Korean students' English fluency and pronunciation levels.
Keywords:
fluency, pronunciation, nonnative raters, rater verbal reports, analytic scoring, speaking assessment