OP-Bench: Benchmarking Over-Personalization for Memory-Augmented Personalized Conversational Agents
Memory-augmented conversational agents enable personalized interactions by drawing on long-term user memory and have gained substantial traction. However, existing benchmarks primarily assess whether agents can recall and apply user information, while overlooking whether such personalization is applied appropriately. In practice, agents may overuse personal information, producing responses that feel forced, intrusive, or socially inappropriate to users. We refer to this issue as over-personalization. In this work, we formalize over-personalization into three types: Irrelevance, Repetition, and Sycophancy, and introduce OP-Bench, a benchmark of 1,700 verified instances constructed from long-horizon dialogue histories. Using OP-Bench, we evaluate multiple large language models and memory-augmentation methods, and find that over-personalization is widespread once memory is introduced. Further analysis reveals that agents tend to retrieve and over-attend to user memories even when doing so is unnecessary. To address this issue, we propose Self-ReCheck, a lightweight, model-agnostic memory filtering mechanism that mitigates over-personalization while preserving personalization performance. Our work takes an initial step toward more controllable and appropriate personalization in memory-augmented dialogue systems.
