Abstract
This article explores the development and application of rubrics to assess an experimental corpus of Auslan (Australian Sign Language)/English simultaneous interpreting performances in both language directions. Two rubrics were used, each comprising four main assessment criteria (accuracy, target text features, delivery features and processing skills). Three external assessors (two interpreter educators and one interpreting practitioner) independently rated the interpreting performances. Results reveal marked variability among the raters: inter-rater reliability between the two interpreter educators was higher than between each interpreter educator and the interpreting practitioner. Results also show that inter-rater reliability for Auslan-to-English simultaneous interpreting performance was higher than for English-to-Auslan simultaneous interpreting performance. This finding suggests greater challenges in evaluating interpreting performance from a spoken language into a signed language than vice versa. The raters' testing and assessment experience, their scoring techniques and the rating process itself may account for the differences in their scores. Further, results suggest that assessment of interpreting performance inevitably involves some degree of uncertainty and subjective judgment.
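The abstract does not specify which agreement statistic underlies the inter-rater reliability comparisons. Purely as an illustrative sketch, the snippet below computes a quadratic-weighted Cohen's kappa, one common way of quantifying pairwise agreement between raters who score performances on an ordinal rubric scale. The function name, the assumed 1-5 score range and the example scores are hypothetical and are not drawn from the study.

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, min_score=1, max_score=5):
    """Pairwise agreement between two raters on an ordinal rubric scale.

    rater_a, rater_b: integer scores given by each rater to the same performances.
    min_score/max_score: the rubric's score range (assumed 1-5 here).
    """
    a = np.asarray(rater_a) - min_score
    b = np.asarray(rater_b) - min_score
    k = max_score - min_score + 1

    # Observed proportions of each score pair.
    observed = np.zeros((k, k))
    for i, j in zip(a, b):
        observed[i, j] += 1
    observed /= observed.sum()

    # Expected proportions under chance agreement (outer product of marginals).
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))

    # Quadratic disagreement weights: larger score gaps are penalised more.
    idx = np.arange(k)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (k - 1) ** 2

    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Hypothetical scores from two raters on ten interpreting performances.
educator_1 = [4, 3, 5, 2, 4, 3, 4, 5, 3, 2]
practitioner = [3, 3, 4, 2, 5, 2, 4, 4, 3, 3]
print(round(quadratic_weighted_kappa(educator_1, practitioner), 3))
```

A value near 1 would indicate close agreement between the pair of raters, while a value near 0 would indicate agreement no better than chance; comparing such pairwise values is one way the rater-pair and language-direction differences reported above could be quantified.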
| Original language | English |
| --- | --- |
| Pages (from-to) | 83-103 |
| Number of pages | 21 |
| Journal | Interpreter and Translator Trainer |
| Volume | 9 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 2015 |
Keywords
- raters
- assessment rubrics
- scoring process and techniques
- inter-rater reliability
- signed language interpreting
- English interpreters