
Measures of Intelligibility in Different Varieties of English: Human vs. Machine

Authors
  • David O. Johnson (University of Kansas)
  • Okim Kang (Northern Arizona University)

Abstract

This paper demonstrates the feasibility of a tool for measuring the intelligibility of English speech using an automated speech system. The system was tested with eighteen speakers from countries representing six Englishes (American, British, Indian, South African, Chinese, and Spanish) who were carefully selected to represent a range of intelligibility. Intelligibility was measured via two different methods: transcription and nonsense. A computer model developed for automated oral proficiency scoring based on suprasegmental measures was adapted to predict intelligibility scores. The Pearson's correlation between the human-assessed and computer-predicted scores was 0.819 for the nonsense construct and 0.760 for the transcription construct. The inter-rater reliability (Cronbach's alpha) was 0.956 for the nonsense intelligibility scores and 0.932 for the transcription scores. Depending on the type of intelligibility measure, the computer utilized different suprasegmental measures to predict the score: 11 measures for the nonsense intelligibility score and 8 for the transcription score. Only two features were common to both constructs. These results can offer L2 researchers different perspectives on measuring intelligibility in future research.
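The two statistics reported in the abstract, Pearson's correlation (agreement between human and machine scores) and Cronbach's alpha (consistency among raters), can be computed as sketched below. The data here are hypothetical and for illustration only; they are not the paper's actual scores.

```python
from statistics import pvariance

def pearson(x, y):
    # Pearson's r: covariance divided by the product of standard deviations
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def cronbach_alpha(ratings):
    # ratings: one list per rater, each scoring the same set of speakers.
    # alpha = k/(k-1) * (1 - sum of per-rater variances / variance of totals)
    k = len(ratings)
    totals = [sum(speaker_scores) for speaker_scores in zip(*ratings)]
    rater_var = sum(pvariance(r) for r in ratings)
    return k / (k - 1) * (1 - rater_var / pvariance(totals))

# Hypothetical intelligibility scores for five speakers (not the paper's data)
human = [72, 85, 60, 90, 78]
machine = [70, 88, 58, 85, 80]
print(round(pearson(human, machine), 3))  # close to 1.0 means strong agreement
```

A correlation of 0.819, as reported for the nonsense construct, would indicate a strong positive relationship between the human and machine scores; an alpha above 0.9 indicates very high inter-rater consistency.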

How to Cite:

Johnson, D. O., & Kang, O. (2016). "Measures of Intelligibility in Different Varieties of English: Human vs. Machine". Pronunciation in Second Language Learning and Teaching Proceedings, 8(1).


Published on
01 Jan 2017