arxiv:2604.02648

GBQA: A Game Benchmark for Evaluating LLMs as Quality Assurance Engineers

Published on Apr 3 · Submitted by Shufan Jiang on Apr 8

Abstract

Large language models struggle with autonomous bug discovery in complex runtime environments: a new game-development benchmark, constructed with a multi-agent system and evaluated via a baseline interactive agent, shows that even frontier models detect fewer than half of the verified bugs.

AI-generated summary

The autonomous discovery of bugs remains a significant challenge in modern software development. Compared to code generation, the complexity of dynamic runtime environments makes bug discovery considerably harder for large language models (LLMs). In this paper, we take game development as a representative domain and introduce the Game Benchmark for Quality Assurance (GBQA), a benchmark containing 30 games and 124 human-verified bugs across three difficulty levels, to evaluate whether LLMs can autonomously detect software bugs. The benchmark is constructed using a multi-agent system that develops games and injects bugs in a scalable manner, with human experts in the loop to ensure correctness. Moreover, we provide a baseline interactive agent equipped with a multi-round ReAct loop and a memory mechanism, enabling long-horizon exploration of game environments for bug detection across different LLMs. Extensive experiments on frontier LLMs demonstrate that autonomous bug discovery remains highly challenging: the best-performing model, Claude-4.6-Opus in thinking mode, identifies only 48.39% of the verified bugs. We believe GBQA provides an adequate testbed and evaluation criterion, and that further progress on it will help close the gap in autonomous software engineering.
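The baseline agent described above (a multi-round ReAct loop with a memory mechanism) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `GameEnv` class, `stub_llm` function, and the `"BUG"`-substring heuristic are all hypothetical stand-ins for the paper's actual game runtime, LLM calls, and bug-reporting protocol.

```python
from dataclasses import dataclass

@dataclass
class GameEnv:
    """Toy stand-in for a game runtime: each action yields an observation."""
    steps: list
    t: int = 0

    def observe(self, action: str) -> str:
        obs = self.steps[min(self.t, len(self.steps) - 1)]
        self.t += 1
        return obs

def react_qa_agent(env, llm, max_rounds=5):
    """Multi-round ReAct loop: think -> act -> observe, feeding a memory
    of past (action, observation) pairs back into each prompt so the
    agent can explore the environment over a long horizon."""
    memory = []
    bugs = []
    for _ in range(max_rounds):
        prompt = ("Memory:\n"
                  + "\n".join(f"act={a} obs={o}" for a, o in memory)
                  + "\nNext action?")
        action = llm(prompt)          # the LLM under evaluation picks an action
        obs = env.observe(action)     # the game returns the resulting state
        memory.append((action, obs))
        if "BUG" in obs:              # naive stand-in for bug detection
            bugs.append(obs)
    return bugs

# Stub LLM that always presses the same button; a real run would query
# a frontier model with the accumulated memory.
def stub_llm(prompt: str) -> str:
    return "press_jump"

env = GameEnv(steps=["player spawns", "BUG: player clips through floor", "menu opens"])
print(react_qa_agent(env, stub_llm, max_rounds=3))
```

The memory here grows without bound; a practical agent would summarize or truncate it to stay within the model's context window.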

Community


Accepted as a workshop paper at the Fourteenth International Conference on Learning Representations (ICLR 2026)


Get this paper in your agent:

hf papers read 2604.02648
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
