TY - UNPB
T1 - Playpen
T2 - An Environment for Exploring Learning Through Conversational Interaction
AU - Horst, Nicola
AU - Mazzaccara, Davide
AU - Schmidt, Antonia
AU - Sullivan, Michael
AU - Momentè, Filippo
AU - Franceschetti, Luca
AU - Sadler, Philipp
AU - Hakimov, Sherzod
AU - Testoni, Alberto
AU - Bernardi, Raffaella
AU - Fernández, Raquel
AU - Koller, Alexander
AU - Lemon, Oliver
AU - Schlangen, David
AU - Giulianelli, Mario
AU - Suglia, Alessandro
N1 - Accepted at EMNLP 2025 (Main). Source code: https://github.com/lm-playpen/playpen. Please send correspondence to: [email protected]
PY - 2025/4/11
Y1 - 2025/4/11
AB - Interaction between learner and feedback-giver has come into focus recently for post-training of Large Language Models (LLMs), through the use of reward models that judge the appropriateness of a model's response. In this paper, we investigate whether Dialogue Games -- goal-directed and rule-governed activities driven predominantly by verbal actions -- can also serve as a source of feedback signals for learning. We introduce Playpen, an environment for off- and online learning through Dialogue Game self-play, and investigate a representative set of post-training methods: supervised fine-tuning (SFT); direct alignment (DPO); and reinforcement learning with GRPO. We experiment with post-training a small LLM (Llama-3.1-8B-Instruct), evaluating performance on unseen instances of training games as well as unseen games, and on standard benchmarks. We find that imitation learning through SFT improves performance on unseen instances, but negatively impacts other skills, while interactive learning with GRPO shows balanced improvements without loss of skills. We release the framework and the baseline training setups to foster research in the promising new direction of learning in (synthetic) interaction.
KW - cs.CL
M3 - Preprint
BT - Playpen
PB - arXiv
ER -