
Pronunciation Learning Through Captioned Videos

Authors: N. Wisniewska, J. C. Mora

Abstract

The current study investigates L2 learners' ability to integrate auditory and orthographic input while reading dynamic text in L2-captioned video, as part of a broader research project on the role of exposure to L2-captioned video in L2 pronunciation development. Within this broader goal, the eye movements of L1-Catalan/Spanish learners of L2-English (n=38) were recorded while they watched short L2-captioned video clips. The Reading Index for Dynamic Texts (Kruger & Steyn, 2013) was used as a measure of learners' amount of text processing, and an index of text-sound integration was computed as the extent to which fixations on selected words synchronized with their auditory onsets. We also explored learners' individual differences in text-sound integration through a novel task that required learners to detect text-sound mismatches. In addition, we measured learners' L2 segmentation skills through a word-spotting task (McQueen, 1996) and L2 proficiency through an elicited imitation task (Ortega et al., 2002). The results shed light on the relationship between reading and audio-text integration skills, suggesting that efficient reading may be what leads to modality integration.
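
The abstract does not specify how the text-sound integration index was computed. The sketch below is one hypothetical operationalization, not the authors' method: it assumes per-word records of first-fixation time and auditory onset (the WordEvent structure, its field names, and the 300 ms window are illustrative assumptions) and scores the proportion of target words whose first fixation falls close in time to the word's onset in the soundtrack.

```python
from dataclasses import dataclass

# Hypothetical records for one participant: first fixation on each selected
# caption word and that word's onset in the audio track (times in ms).
@dataclass
class WordEvent:
    word: str
    fixation_onset: float | None  # None if the word was never fixated
    audio_onset: float

def sync_index(events: list[WordEvent], window_ms: float = 300.0) -> float:
    """Proportion of target words whose first fixation occurs within
    +/- window_ms of the word's auditory onset (one possible
    text-sound synchronization measure)."""
    if not events:
        return 0.0
    synced = sum(
        1 for e in events
        if e.fixation_onset is not None
        and abs(e.fixation_onset - e.audio_onset) <= window_ms
    )
    return synced / len(events)

# Invented timings for three target words, for illustration only.
sample = [
    WordEvent("thought", fixation_onset=1240.0, audio_onset=1200.0),
    WordEvent("village", fixation_onset=2980.0, audio_onset=2500.0),
    WordEvent("answer",  fixation_onset=None,   audio_onset=4100.0),
]
print(f"Synchronization index: {sync_index(sample):.2f}")  # -> 0.33
```

A proportion-based score is only one choice; the same data could instead yield a mean fixation-to-onset lag, depending on how synchronization is defined.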

Keywords:

How to Cite: Wisniewska, N. & Mora, J. C. (2017). "Pronunciation Learning Through Captioned Videos", Pronunciation in Second Language Learning and Teaching Proceedings, 9(1).