TY - GEN
T1 - Temporal and Second Language Influence on Intra-Annotator Agreement and Stability in Hate Speech Labelling
AU - Abercrombie, Gavin
AU - Hovy, Dirk
AU - Prabhakaran, Vinodkumar
N1 - Funding Information:
Gavin Abercrombie was supported by the EPSRC projects ‘Gender Bias in Conversational AI’ (EP/T023767/1) and ‘Equally Safe Online’ (EP/W025493/1). His visit to Bocconi University was funded by a Scottish Informatics and Computer Science Alliance (SICSA) PECE travel grant. Dirk Hovy received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement No. 949944, INTEGRATOR), and by Fondazione Cariplo (grant No. 2020-4288, MONICA). He is a member of the MilaNLP group and the Data and Marketing Insights Unit of the Bocconi Institute for Data Science and Analysis. This work was also partly funded by Google Research.
Publisher Copyright:
© 2023 Association for Computational Linguistics.
PY - 2023/7/13
Y1 - 2023/7/13
N2 - Much work in natural language processing (NLP) relies on human annotation. The majority of this implicitly assumes that annotators’ labels are temporally stable, although the reality is that human judgements are rarely consistent over time. As a subjective annotation task, hate speech labels depend on annotators’ emotional and moral reactions to the language used to convey the message. Studies in Cognitive Science reveal a ‘foreign language effect’, whereby people take differing moral positions and perceive offensive phrases to be weaker in their second languages. Does this affect annotations as well? We conduct an experiment to investigate the impacts of (1) time and (2) different language conditions (English and German) on measurements of intra-annotator agreement in a hate speech labelling task. While we do not observe the expected lower stability in the different language condition, we find that overall agreement is significantly lower than is implicitly assumed in annotation tasks, which has important implications for dataset reproducibility in NLP.
AB - Much work in natural language processing (NLP) relies on human annotation. The majority of this implicitly assumes that annotators’ labels are temporally stable, although the reality is that human judgements are rarely consistent over time. As a subjective annotation task, hate speech labels depend on annotators’ emotional and moral reactions to the language used to convey the message. Studies in Cognitive Science reveal a ‘foreign language effect’, whereby people take differing moral positions and perceive offensive phrases to be weaker in their second languages. Does this affect annotations as well? We conduct an experiment to investigate the impacts of (1) time and (2) different language conditions (English and German) on measurements of intra-annotator agreement in a hate speech labelling task. While we do not observe the expected lower stability in the different language condition, we find that overall agreement is significantly lower than is implicitly assumed in annotation tasks, which has important implications for dataset reproducibility in NLP.
UR - http://www.scopus.com/inward/record.url?scp=85174838161&partnerID=8YFLogxK
U2 - 10.18653/v1/2023.law-1.10
DO - 10.18653/v1/2023.law-1.10
M3 - Conference contribution
AN - SCOPUS:85174838161
T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP - 96
EP - 103
BT - Proceedings of the 17th Linguistic Annotation Workshop (LAW-XVII)
A2 - Prange, Jakob
A2 - Friedrich, Annemarie
PB - Association for Computational Linguistics
T2 - 17th Linguistic Annotation Workshop 2023
Y2 - 13 July 2023
ER -