Detecting Edit Failures In Large Language Models: An Improved Specificity Benchmark

  • Jason Hoelscher-Obermaier
  • Julia H. Persson
  • Esben Kran
  • Ioannis Konstas
  • Fazl Barez*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

27 Citations (Scopus)
1 Download (Pure)

Abstract

Recent model editing techniques promise to mitigate the problem of memorizing false or outdated associations during large language model (LLM) training. However, we show that these techniques can introduce large unwanted side effects that are not detected by existing specificity benchmarks. We extend the existing COUNTERFACT benchmark to include a dynamic component and dub our benchmark COUNTERFACT+. Additionally, we augment the metrics used for measuring specificity with a principled KL divergence-based metric. We use this improved benchmark to evaluate recent model editing techniques and find that they suffer from low specificity. Our findings highlight the need for improved specificity benchmarks that identify and prevent unwanted side effects.
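The KL divergence-based specificity metric mentioned in the abstract can be illustrated with a small sketch. The idea, as described, is to compare the model's predictive distributions before and after an edit on prompts the edit should not affect; the function names, the averaging over prompts, and the toy data below are illustrative assumptions, not the paper's exact implementation.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(P || Q) between two discrete distributions, e.g. next-token
    probabilities over the vocabulary. eps guards against log(0)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def specificity_kl(pre_edit_dists, post_edit_dists):
    """Mean KL divergence over a set of unrelated ("neighborhood") prompts.

    Each pair holds the model's next-token distribution on a prompt that the
    edit should leave untouched; larger values indicate worse specificity
    (i.e. larger unwanted side effects). Hypothetical aggregation choice.
    """
    pairs = list(zip(pre_edit_dists, post_edit_dists))
    return sum(kl_divergence(p, q) for p, q in pairs) / len(pairs)

# Toy example: two prompts, vocabulary of size 3.
pre  = [[0.7, 0.2, 0.1], [0.5, 0.3, 0.2]]
post = [[0.7, 0.2, 0.1], [0.2, 0.3, 0.5]]  # second prompt drifted after edit
score = specificity_kl(pre, post)
```

An unchanged distribution contributes zero, so a perfectly specific edit would score 0; any drift on unaffected prompts pushes the score up.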

Original language: English
Title of host publication: Findings of the Association for Computational Linguistics, ACL 2023
Publisher: Association for Computational Linguistics
Pages: 11548-11559
Number of pages: 12
ISBN (Electronic): 9781959429623
DOIs
Publication status: Published - 9 Jul 2023
Event: 61st Annual Meeting of the Association for Computational Linguistics 2023 - Toronto, Canada
Duration: 9 Jul 2023 - 14 Jul 2023

Conference

Conference: 61st Annual Meeting of the Association for Computational Linguistics 2023
Abbreviated title: ACL 2023
Country/Territory: Canada
City: Toronto
Period: 9/07/23 - 14/07/23

ASJC Scopus subject areas

  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics
