Abstract
Recent model editing techniques promise to mitigate the problem of memorizing false or outdated associations during large language model (LLM) training. However, we show that these techniques can introduce large unwanted side effects that existing specificity benchmarks fail to detect. We extend the existing COUNTERFACT benchmark with a dynamic component and dub our benchmark COUNTERFACT+. Additionally, we augment the metrics used for measuring specificity with a principled KL divergence-based metric. We use this improved benchmark to evaluate recent model editing techniques and find that they suffer from low specificity. Our findings highlight the need for improved specificity benchmarks that identify and prevent unwanted side effects.
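The abstract does not spell out the metric's exact definition, but the core idea of a KL divergence-based specificity measure can be sketched as follows: compare a model's next-token distribution on prompts *unrelated* to the edit, before and after editing. The distributions and function names below are illustrative assumptions, not the paper's implementation:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions over the same vocabulary.

    eps guards against log(0) for near-zero probabilities.
    """
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Hypothetical next-token distributions on a prompt unrelated to the edit,
# before and after applying a model edit (numbers are made up for illustration).
pre_edit = [0.70, 0.20, 0.10]
post_edit = [0.65, 0.25, 0.10]

drift = kl_divergence(pre_edit, post_edit)
# A highly specific edit should leave unrelated prompts nearly unchanged,
# so this divergence should stay close to zero; large values indicate
# the kind of unwanted side effect the benchmark is designed to surface.
```

In practice such a metric would be averaged over many unrelated prompts drawn from the benchmark, so that a single aggregate score summarizes how far the edited model has drifted.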
| Original language | English |
|---|---|
| Title of host publication | Findings of the Association for Computational Linguistics, ACL 2023 |
| Publisher | Association for Computational Linguistics |
| Pages | 11548-11559 |
| Number of pages | 12 |
| ISBN (Electronic) | 9781959429623 |
| DOIs | |
| Publication status | Published - 9 Jul 2023 |
| Event | 61st Annual Meeting of the Association for Computational Linguistics 2023, Toronto, Canada. Duration: 9 Jul 2023 → 14 Jul 2023 |
Conference
| Conference | 61st Annual Meeting of the Association for Computational Linguistics 2023 |
|---|---|
| Abbreviated title | ACL 2023 |
| Country/Territory | Canada |
| City | Toronto |
| Period | 9/07/23 → 14/07/23 |
ASJC Scopus subject areas
- Computer Science Applications
- Linguistics and Language
- Language and Linguistics