Triangulating LLM Progress through Benchmarks, Games, and Cognitive Tests

Filippo Momentè*, Alessandro Suglia, Mario Giulianelli, Ambra Ferrari, Alexander Koller, Oliver Lemon, David Schlangen, Raquel Fernández, Raffaella Bernardi

*Corresponding author for this work

Research output: Working paper › Preprint


Abstract

We examine three evaluation paradigms: large question-answering benchmarks (e.g., MMLU and BBH), interactive games (e.g., Signalling Games or Taboo), and cognitive tests (e.g., for working memory or theory of mind). First, we investigate which of the former two, benchmarks or games, is most effective at discriminating LLMs of varying quality. Then, inspired by human cognitive assessments, we compile a suite of targeted tests that measure cognitive abilities deemed essential for effective language use, and we investigate their correlation with model performance on benchmarks and in games. Our analyses reveal that interactive games are superior to standard benchmarks in discriminating models. Causal and logical reasoning correlate with both static and interactive tests, while differences emerge for core executive functions and social/emotional skills, which correlate more strongly with games. We advocate the development of new interactive benchmarks and of targeted cognitive tasks that are inspired by the assessment of human abilities but designed specifically for LLMs.
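The abstract describes two analyses: how well each paradigm separates models of differing quality, and how cognitive-test scores correlate with benchmark and game performance. The sketch below is not the authors' code; it is a minimal illustration, with hypothetical model scores, of how such a rank-correlation comparison could be computed using SciPy's Spearman correlation.

```python
# Minimal sketch (hypothetical data, not the paper's pipeline): correlate
# per-model scores on a cognitive test with scores on a benchmark and a game.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical scores for five models under three evaluation paradigms.
cognitive_test = np.array([0.62, 0.71, 0.55, 0.80, 0.48])  # e.g. a working-memory task
benchmark      = np.array([0.64, 0.69, 0.58, 0.77, 0.52])  # e.g. MMLU accuracy
game           = np.array([0.40, 0.66, 0.35, 0.78, 0.30])  # e.g. Taboo success rate

# Spearman rank correlation between the cognitive test and each paradigm.
rho_bench, p_bench = spearmanr(cognitive_test, benchmark)
rho_game, p_game = spearmanr(cognitive_test, game)
print(f"cognitive test vs benchmark: rho={rho_bench:.2f} (p={p_bench:.3f})")
print(f"cognitive test vs game:      rho={rho_game:.2f} (p={p_game:.3f})")

# A crude proxy for discriminative power: the spread of scores across models.
print(f"score std  benchmark: {benchmark.std():.3f}  game: {game.std():.3f}")
```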
Original language: English
Publisher: arXiv
DOIs
Publication status: Published - 20 Feb 2025

Keywords

  • cs.CL

