arXiv:2505.18126v1 [cs.LG] 23 May 2025

Reward Model Overoptimisation in Iterated RLHF

Lorenz Wolf* (UCL Centre for Artificial Intelligence, Department of Computer Science, University College London), Robert Kirk (UK AI Security Institute), Mirco Musolesi (UCL Centre for Artificial Intelligence, Department of Computer Science, University College London; Department of Computer Science and Engineering, University of Bologna)

Abstract

Reinforcement learning from human feedback (RLHF) is a widely used method for aligning large language models with human preferences. However, RLHF often suffers from reward model overoptimisation, in which models overfit to the reward function, resulting in non-generalisable policies that exploit the idiosyncrasies and peculiarities of the reward function. A common mitigation is iterated RLHF, in which reward models are repeatedly retrained with updated human feedback and policies are re-optimised. Despite its increasing adoption, the dynamics of overoptimisation in this setting remain poorly understood. In this work, we present the first comprehensive study of overoptimisation in iterated RLHF. We systematically analyse key design choices: how reward model training data is transferred across iterations, which reward function is used for optimisation, and how policies are initialised. Using the controlled AlpacaFarm benchmark, we observe that overoptimisation tends to decrease over successive iterations, as reward models increasingly approximate ground-truth preferences. However, performance gains diminish over time, and while reinitialising from the base policy is robust, it limits optimisation flexibility. Other initialisation strategies often fail to recover from early overoptimisation. These findings offer actionable insights for building more stable and generalisable RLHF pipelines.

1 Introduction

Reinforcement learning from human feedback (RLHF) has become the standard method for aligning large language models with human preferences [46, 27, 2].
However, RLHF faces a critical vulnerability: reward model overoptimisation [11]. As fine-tuning progresses, models learn to overfit to the trained reward function, achieving high scores without genuinely satisfying human intent. This creates brittle policies that exploit loopholes rather than developing robust behaviours, leading to systems that appear aligned during training but fail catastrophically when deployed. Iterated RLHF represents a promising approach to combat this problem. By repeatedly collecting new preferences on the latest policy outputs, retraining the reward model, and fine-tuning the policy [2, 41], practitioners aim to iteratively close the gap between proxy and true reward. Despite its widespread adoption in industry [46, 27, 2], it remains uncertain whether iterated RLHF genuinely resolves overoptimisation, merely postpones the inevitable exploitation of the reward model akin to persistent adversarial policies [12], or perpetuates a recurring cycle of overoptimisation in different forms [35].

* Corresponding author: lorenz.wolf.22@ucl.ac.uk

Figure 1: The Iterated RLHF framework performs multiple rounds of preference data collection, reward modelling, and policy optimisation. Our research reveals three design choices that dramatically impact performance: (1) how preference data is managed across iterations, (2) which reward function formulation to optimise, and (3) how policies are initialised at each stage. Effectively configuring these elements can significantly reduce overoptimisation while improving true reward.

In this work, we present the first systematic investigation into reward model overoptimisation in iterated
RLHF. We identify three pivotal design choices, highlighted in Figure 1, that critically influence the success or failure of the process: preference data management (i.e., whether to aggregate or isolate preference data across iterations), reward function formulation (i.e., the choice of reward signal to optimise in subsequent training rounds), and policy initialisation (i.e., the strategy for initialising the policy at the start of each fine-tuning cycle).

Figure 2: Our design choices for preference data management, reward function formulation, and policy initialisation significantly impact performance across iterations. Concatenating preference data yields dramatic improvements in true reward scores, while equal-sampling with last-policy initialisation performs worst (see Section 5). These findings offer concrete guidance for practitioners implementing robust RLHF systems.

Our key contributions can be summarised as:

• We present the first formal investigation of overoptimisation dynamics across multiple RLHF iterations, relaxing assumptions made in previous work.
• We discuss a systematic evaluation of key design choices with quantitative evidence of their impact on performance and overoptimisation.
• We provide guidelines for practitioners implementing iterated RLHF, including specific recommendations for preference data management, reward function selection, and policy initialisation strategies.

Using a gold-standard reward model to simulate human labellers [5, 11] on the AlpacaFarm dataset [38] and working exclusively with open-source models, our experiments yield several key insights. Reward models become increasingly robust across iterations, leading to higher gold reward scores (Figure 2).
Performance gains diminish after three iterations for most methods. Concatenating preference data across iterations dramatically outperforms other approaches. Small but persistent overoptimisation remains after four iterations regardless of design choices. Our results demonstrate that while iterated RLHF significantly improves reward model robustness, it does not fully eliminate overoptimisation. This underscores the need for continued research into more robust alignment methods that can withstand sophisticated specification gaming [16] by increasingly capable language models.

2 Related work

RLHF is the standard for aligning large language models to human preference data. The iterated approach was first used in [2] to fix robustness and calibration issues attributed to a lack of data in the high-score regime, and it has since gained in popularity [32, 41, 44, 1, 7, 45]. Besides training on newly collected preferences, an iterated scheme to train reward models from synthetically generated preference data has been proposed in [39] and shown to improve performance on the reward model benchmark RewardBench [18]; however, the authors focus on iterated training of an evaluator and study neither overoptimisation nor the design choices we consider. In the context of Direct Preference Optimisation (DPO) [31], offline, online, and hybrid approaches that repeatedly collect new preference data have been investigated, mostly in terms of sample efficiency [41, 6, 26, 23]. More broadly, iterated methods have been investigated for machine teaching [19] and to resolve feedback loops caused by model deployment in supervised learning [29] and
also performative RL [22]. Overoptimisation is a common issue in RL, and evidence of this has been frequently reported in the RLHF literature [46, 37, 11, 35]. A promising method to mitigate overoptimisation is using reward model ensembles combined with conservative optimisation, as proposed by [5]. Several works further explore reward model ensembles in RLHF [9, 21]. Notably, in [33] the authors introduce weight averaged reward models (WARM), alleviating the inference cost of multiple reward models during training. Following [5] and [11] in tackling reward model overoptimisation, several works propose alternative approaches including reward model distillation [10], hidden state regularisation [43], and more [42, 24, 20, 13]. One commonly reported mode of overoptimisation is length bias [35, 28], which can be tackled by disentangling reward signals related to response length from content quality [4]. To the best of our knowledge, the literature lacks a systematic investigation into overoptimisation in iterated RLHF. Such an investigation is not only necessary but also fundamentally important for a deeper understanding and meaningful improvement of fine-tuning methods based on this technique.

3 Iterated Reinforcement Learning from Human Feedback

In this section, we first outline the process of a single iteration of RLHF and then extend it to the iterated framework. The RLHF pipeline consists of the following three steps: 1. collection of a preference dataset; 2. reward model training; 3. policy optimisation on the reward model. Though not an integral part of the RLHF pipeline, it is common in practice for step 1 to be preceded by supervised fine-tuning on labelled examples.

3.1 Single-iteration RLHF

Preference data collection. We begin with a policy π_sft, obtained by supervised fine-tuning (in practical terms, a policy checkpoint), and use it for collecting preference data.
More specifically, the preference dataset D consists of tuples {x_i, y_i,0, y_i,1, p_i} for i = 1, ..., N, where x_i ∈ X is a prompt or instruction, y_i,j ∼ π_sft(·|x_i) with j = 0, 1 are two synthesised responses from the supervised fine-tuned policy checkpoint π_sft, and p_i is the preference label indicating whether y_i,0 is preferred over y_i,1. Similarly to [5, 11], the preference labels p_i are simulated with a gold reward model denoted by R*, which is significantly larger in terms of parameter size than the proxy reward models, as an approximation for the human labellers that would be used in the real RLHF process instead.

Reward model training. The proxy reward model R_ϕ is initialised from model checkpoint R_init, with a randomly initialised prediction head, and subsequently trained by minimising the cross-entropy loss on the preference dataset D. It is standard to use the Bradley-Terry model [3], under which the probability of preferring the answer y_0 over y_1 given prompt x is given by

P(y_0 ≻ y_1 | x) = 1 / (1 + exp(R(x, y_1) − R(x, y_0))).   (1)

Policy optimisation. Having trained the proxy reward model R_ϕ, the policy π_θ is initialised from π_sft and then fine-tuned to optimise R_ϕ. This is commonly achieved with the proximal policy optimisation (PPO) algorithm [34]. In order to prevent overoptimisation of the proxy reward model and regularise π_θ to not diverge too drastically from its initialisation, a Kullback-Leibler (KL) divergence penalty is used. This
yields the overall reward maximised as

R_PPO(x, y) = R_ϕ(x, y) − β log [π_θ(y|x) / π_sft(y|x)],   (2)

Algorithm 1: Iterated RLHF (design choices highlighted)

1: Inputs: prompt dataset X = {x_i} for i = 1, ..., N; π_sft; R_init; R*; number of iterations n_iter
2: π_0 ← π_sft
3: for k = 1 to n_iter do
4:   y_i,0, y_i,1 ∼ π_{k−1}(x_i) for all x_i ∈ X
5:   p_i ← R*(x_i, y_i,0, y_i,1) for all x_i ∈ X
6:   D̃_k ← {x_i, y_i,0, y_i,1, p_i} for i = 1, ..., N
7:   D_k ← CombineData([D̃_1, ..., D̃_k])
8:   R̃_k ← TrainRM(R_init, D_k)
9:   R_k ← CombineRM([R̃_1, ..., R̃_k])
10:  π^init_k ← CombineΠ([π_0, ..., π_{k−1}])
11:  π_k ← TrainRL(π^init_k, R_k)
12: end for
13: return π_k

Figure 3: Design choices for Iterated RLHF (Algorithm 1). Options include how to combine preference data (latest only, concatenate, or sample), transfer reward models (take last, ensemble, or weight averaged), and initialise policies (take last, interpolate, or from SFT). These choices determine how learning signals are propagated through each iteration.

where β controls the strength of the KL penalty (unless specified otherwise we set β = 1 × 10⁻⁴). This procedure, which only collects preferences once in the entire pipeline, has an important disadvantage. Reward models have been found to be poorly calibrated in the higher reward regime [2], and trained policies overoptimise the proxy reward model, leading to unstable fine-tuned policies [30, 11, 46]. Notably, policy optimisation induces a divergence between the distributions π_θ(y|x) and π_sft(y|x). This causes the optimised policy to generate outputs that are different from those seen in the training data D. As a result, the reward model R_ϕ, which was trained on the data D, is now being evaluated on data that it has not seen before (out of distribution).
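As a concrete illustration, the Bradley-Terry preference probability (Eq. 1) and the KL-penalised PPO reward (Eq. 2) can be sketched as follows. This is a minimal sketch operating on scalar rewards and log-probabilities; the function names are ours and not from any released codebase:

```python
import math

def bt_preference_prob(r0: float, r1: float) -> float:
    """Bradley-Terry probability that y0 is preferred over y1 (Eq. 1),
    given scalar rewards r0 = R(x, y0) and r1 = R(x, y1)."""
    return 1.0 / (1.0 + math.exp(r1 - r0))

def ppo_reward(r_proxy: float, logp_policy: float, logp_sft: float,
               beta: float = 1e-4) -> float:
    """KL-penalised reward maximised by PPO (Eq. 2):
    R_PPO = R_phi(x, y) - beta * log(pi_theta(y|x) / pi_sft(y|x))."""
    return r_proxy - beta * (logp_policy - logp_sft)
```

Equal rewards give a 50/50 preference under Eq. 1, and the penalty in Eq. 2 only reduces the reward when the policy assigns higher log-probability to a response than π_sft does, which is exactly the divergence the KL term is meant to discourage.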
3.2 Iterated RLHF and design choices

The divergence between the distributions π_θ(y|x) and π_sft(y|x) is the problem addressed by iterated RLHF. In this process, steps 1-3 of the RLHF pipeline (namely, collection of preference data, reward model training, and policy optimisation) are repeated over multiple iterations, as shown in Figure 1. Just as in the single-iteration setting, we start from the checkpoint π_sft and initialise the reward model from R_init with a randomly initialised prediction head. However, there are multiple design choices to be made when choosing how exactly to perform iterated RLHF training. We now describe the process in more detail, highlighting the design choices throughout. Please refer to Algorithm 1 for a schematic of the entire process. For simplicity of notation, we omit explicit references to the policy and reward model parameters θ and ϕ, using the subscript k to index iterations instead. During the k-th iteration of RLHF, we use the policy from the previous one, denoted by π_{k−1}, to synthesise pairs of responses for the new preference data, denoted by D̃_k. (Sampling from all previous policies would be unnecessary, as it would merely replicate already-collected preference data at a higher cost.) This new data enables the training of a proxy reward model for which the current policy's output is in-distribution, potentially mitigating the issue of overoptimisation. Taking into account previous iterations, we now have access to the list of preference datasets [D̃_1, ..., D̃_k]. Here we face the first design choice: how do we combine the list of k preference datasets into a single training dataset D_k? Combining preference
data. Given a list of k preference datasets, whose responses have been generated by different policies π_0, ..., π_{k−1}, we identify three possible options to consolidate them into a single training dataset. The first option (Figure 3.a) is to simply set D_k = D̃_k, only training the reward model on the preference data collected in the current iteration (i.e., no inter-iteration transfer). The second option, at the other extreme, is to concatenate all datasets (Figure 3.b). Reusing all the data at each iteration is expected to result in decreased overoptimisation and a better approximation of the true reward function. However, this comes with a reward model training computational cost that scales linearly with the number of iterations. Finally, balancing training time and information transfer, we can keep the size of the reward model training data constant across iterations by sampling a subset of each D̃_i for i = 1, ..., k and concatenating the subsets to form D_k (Figure 3.c). Once the training data D_k has been obtained, the proxy reward model R̃_k can be trained on it. R̃_k is initialised from the same base model in all iterations. Having trained the reward model, we now arrive at the second critical design choice: how do we transfer information from the list of all previously trained proxy reward models [R̃_1, ..., R̃_k] into a single reward function R_k that can be optimised by the policy?

Combining reward models. The reward model is the crucial piece in obtaining generalisable and consistent policies in RLHF, and it is even more important over multiple iterations as effects compound. Given the list [R̃_1, ..., R̃_k] containing the k proxy reward models leading up to the current iteration, the task is to obtain a robust reward function to be optimised. We note that this design choice can be considered in parallel to the combination of preference data, as both target the same outcome of transferring information from previous iterations to the reward function.
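The Combine steps of Algorithm 1 can be sketched in a few lines. This is a minimal illustration under our own simplifications (datasets as Python lists, per-example rewards and model parameters as flat float lists); the strategy names mirror Figure 3, and the reward-model and policy-initialisation options are the ones described in the surrounding paragraphs of this section:

```python
import random

def combine_data(datasets, strategy="concatenate", rng=None):
    """CombineData: merge preference datasets [D~_1, ..., D~_k].
    'take_last' uses only the newest data, 'concatenate' uses everything,
    and 'sample' keeps |D_k| roughly constant by subsampling each dataset."""
    rng = rng or random.Random(0)
    k = len(datasets)
    if strategy == "take_last":
        return list(datasets[-1])
    if strategy == "concatenate":
        return [ex for d in datasets for ex in d]
    if strategy == "sample":
        n = len(datasets[-1]) // k  # equal share from each iteration
        return [ex for d in datasets for ex in rng.sample(list(d), n)]
    raise ValueError(strategy)

def combine_rewards(rewards, strategy="take_last"):
    """CombineRM for one (x, y): rewards = [R~_1(x,y), ..., R~_k(x,y)]."""
    if strategy == "take_last":
        return rewards[-1]
    if strategy == "ensemble":   # mean of the ensemble members
        return sum(rewards) / len(rewards)
    if strategy == "wco":        # worst-case optimisation: take the minimum
        return min(rewards)
    raise ValueError(strategy)

def weight_average(params):
    """Weight-averaged reward model: phi_k = (1/k) * sum_i phi~_i,
    applied element-wise (flat lists stand in for parameter tensors)."""
    return [sum(ps) / len(params) for ps in zip(*params)]

def liti_init(prev_init, prev_policy, eta):
    """CombinePi via LITI: pi^init_k = (1-eta)*pi^init_{k-1} + eta*pi_{k-1}.
    eta = 1 recovers Take last; eta = 0 ignores the fine-tuned policy."""
    return [(1 - eta) * a + eta * b for a, b in zip(prev_init, prev_policy)]
```

Note that `combine_rewards` acts in function space (per example, at inference time), whereas `weight_average` and `liti_init` act once in parameter space, which is what makes them cheap during policy optimisation.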
To achieve this task we investigate three types of solutions. The first only uses the most recently trained proxy reward model, setting R_k = R̃_k (Figure 3.d); hence there is no utilisation of previously trained reward models. In contrast, the second option ensembles all previously trained proxy reward models, taking the mean of the individual rewards (Figure 3.e) [5]: R_k(x, y) = (1/k) Σ_{i=1}^{k} R̃_i(x, y). Since reward model ensembles showed limited improvements in [5], we also evaluate worst-case optimisation (WCO), i.e., optimising the minimum R_k(x, y) = min_{i=1,...,k} R̃_i(x, y). These options come with the disadvantage of requiring inference on k reward models in parallel. To address the computational cost, we also consider weight averaged reward models (see Figure 3.f) obtained by performing task arithmetic [15]. More formally, given a sequence of reward models R̃_1, ..., R̃_k, parameterised by ϕ̃_1, ..., ϕ̃_k, respectively, we obtain the weight averaged reward model R_k by setting its parameters to ϕ_k = (1/k) Σ_{i=1}^{k} ϕ̃_i. Having obtained the reward function, the next and final step of each iteration is to optimise it, which leads us to the third and final design choice: given π_sft and the
fine-tuned policies π_1, ..., π_{k−1}, how can we choose π^init_k to balance efficiency and robustness against overoptimisation?

Policy initialisation. The final design choice concerns the initialisation of the policy, i.e., how π^init_k is chosen. The authors of [2] initialise the policy from π_sft at every iteration, not taking into consideration previously performed computation. We call this initialisation From SFT, shown in Figure 3.i. As an alternative, we use linear interpolation towards initialisation (LITI) [32], which was inspired by WiSE-FT proposed by [40]. With LITI, shown in Figure 3.h, we set π^init_k = (1 − η) π^init_{k−1} + η π_{k−1}, where η is a hyperparameter that balances how strongly the previous iteration's optimisation of R_{k−1} is carried over. Taking η = 1 corresponds to initialising the current policy from the previously fine-tuned one, setting π^init_k = π_{k−1}. Since continued fine-tuning of the most recent policy fully relies on the previous iterations, it may suffer from entropy collapse, leading to no optimisation in later iterations. Continuing with the fine-tuned policy also carries risks if undesirable behaviour learned in previous iterations cannot be unlearned. Note that when performing LITI, the policy is regularised with the KL divergence between the policy and its initialisation π^init_k.

4 Evaluating overoptimisation in Iterated RLHF

In Section 3 we formalised the process of iterated RLHF and highlighted the critical design choices. In this section, we detail our evaluation setup, emphasising the quantification of overoptimisation and examining how its progression over iterations is influenced by different design choices.

4.1 Training setup

Our evaluation setup follows extensive prior work that studies overoptimisation in single-iteration RLHF in a controlled and simulated manner [5, 11]. Similarly to [5], we use instructions from the AlpacaFarm dataset [8] for reward model training and policy optimisation.
The preference data D̃_k collected at each iteration contains preferences for a subset of 1000 instructions in the preference split of AlpacaFarm. Preference labels p_i are simulated with the 7-billion-parameter Reward-Model-AlpacaFarm-Human [8], which is also used by [5]. It is worth noting again the significant difference in parameter size between the proxy reward models and the gold reward model, justifying the use of the gold reward model as a proxy for human labellers. Similarly to [5], to obtain π_sft, we performed supervised fine-tuning on the pythia-410m model² on the AlpacaFarm SFT split. We chose pythia-410m as it achieves an appropriate balance between computational cost and experimental rigour for our investigation. The authors in [11] also found that policy size did not affect the shape of the overoptimisation curve in their setting, further justifying this choice of policy. We initialise proxy reward models R̃_k from the Hugging Face checkpoint tlc4418/pythia_70m_sft provided by [5]³, as well as the larger pythia-160m, with a randomly initialised prediction head [5]. We train reward models for 5 epochs with a learning rate of 1 × 10⁻⁵ [5]. For policy optimisation, we perform 6000 steps of PPO on the unlabelled split of AlpacaFarm. The learning rate is set to 1 × 10⁻⁶ and a constant KL penalty of 1 × 10⁻⁴ is used. The full specifications of the hyperparameters for reward model training and policy optimisation, and the prompt format, are given in Appendix
C. We perform a total of 4 iterations per method and report the results of the final iteration in comparison to the initial one. All results presented in our performance evaluation are reported for 8 random seeds, except for policy initialisation From SFT with the Take last configuration for both preference data and reward model, for which we only obtained 4 random seeds due to compute constraints. We note that this is still above the commonly reported 3 random seeds. To aggregate seeds in both gold score and KL, we collect all seeds per iteration and bucket data points by KL. We then plot the mean and standard deviation of the gold rewards per bucket against the KL.

4.2 Measuring reward model overoptimisation with the Maximum Mean Discrepancy

The standard methodology for investigating reward model overoptimisation is to compare the achieved mean reward on the proxy and gold reward functions on a hold-out set [5, 25, 11]. However, this analysis does not capture discrepancies in the long tail, i.e., in the high-reward regime, which have a larger impact on policy optimisation. In this work we propose to compare two reward models based on their distributions of rewards. To this end, we evaluate the policy and reward models on the 2000 unseen instructions contained in the validation split of AlpacaFarm every 300 steps during policy optimisation. Our approach to measuring differences between reward functions consists of two steps, the first of which is a standardisation that ensures reward functions that lead to the same ordering of policies when optimised are treated as equal (see Appendix B). In the second step, we use the maximum mean discrepancy (MMD) [14] to measure the discrepancy between the two reward functions. In particular, we utilise this method to compare the proxy reward models trained at each iteration with the gold reward model R*. For full details and a justification of the validity of this method we refer the reader to Appendix A.
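The second step can be sketched as follows, assuming the rewards have already been standardised as above. We use a simple biased MMD estimate with an RBF kernel; the kernel choice and bandwidth here are our illustrative assumptions, not necessarily the exact configuration used in the experiments:

```python
import math

def _rbf(a: float, b: float, gamma: float) -> float:
    """RBF kernel between two scalar rewards."""
    return math.exp(-gamma * (a - b) ** 2)

def mmd_squared(xs, ys, gamma=1.0):
    """Biased estimate of MMD^2 between two 1-D samples of (standardised)
    rewards, e.g. proxy-model scores vs gold-model scores on a hold-out set."""
    n, m = len(xs), len(ys)
    kxx = sum(_rbf(a, b, gamma) for a in xs for b in xs) / (n * n)
    kyy = sum(_rbf(a, b, gamma) for a in ys for b in ys) / (m * m)
    kxy = sum(_rbf(a, b, gamma) for a in xs for b in ys) / (n * m)
    return kxx + kyy - 2.0 * kxy
```

Identical reward distributions give an MMD of zero, and the estimate grows as the proxy's reward distribution drifts away from the gold model's, which is how tail discrepancies missed by mean-score comparisons become visible.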
5 Experimental results

When comparing different methods, we primarily focus on their performance in the final iteration, as this iteration consistently outperforms previous ones for all algorithms, and we examine the reward-KL curves produced by each method. We also compare the performance of methods across multiple iterations, to see how the KL-reward curves change through the iterations.

5.1 Iterated RLHF can close the gap between proxy and gold reward functions

Before investigating the differences between the design choices, we focus on the progression of reward model robustness across iterations more generally. In Figure 4, we show how performing multiple iterations of RLHF, concatenating all preference data to train the reward model, and re-initialising the policy from π_sft at each iteration decreases the gap between the gold reward function and the proxy. As iterations progress, the proxy reward model becomes more robust and increasingly aligned with the gold reward model on the distribution observed during policy optimisation. Furthermore, the KL-reward Pareto front advances with each iteration, although improvements diminish as the distance between proxy and gold reward curves shrinks in later iterations.

² https://huggingface.co/EleutherAI/pythia-410m
³ https://huggingface.co/tlc4418/pythia_70m_sft

Despite
these diminishing returns, there remains scope to better align gold and proxy reward functions. Comparing reward distributions across iterations further reveals that, after the policy closely approximates the output distribution in D_k, the MMD increases again in the high-KL regime for all iterations, especially rapidly in the initial iteration (see Figure 4). Additionally, the rate at which the proxy-gold reward gap closes varies considerably among methods (see Appendix E.1), highlighting the importance of investigating the design choices described in Section 3.

Figure 4: Progression of proxy and gold reward alignment across RLHF iterations with policy reinitialisation from π_sft and concatenated preference data. Mean gold and proxy scores (with standard error across seeds) indicate a narrowing gap across iterations, demonstrating improved robustness of the proxy reward model and alignment with the gold reward. However, diminishing returns are visible in later iterations. The Maximum Mean Discrepancy (MMD) between proxy and gold reward distributions on the hold-out set shows initial convergence, but reveals a consistent rise at higher KL values, especially in early iterations, highlighting persistent distributional shifts and misalignments not captured by mean scores alone.

5.2 Combining preference data

Scaling reward model training data is most effective. We first focus on comparing methods for combining preference datasets. To isolate the effects of varying the combination strategy, we fix the policy initialisation to From SFT (Figure 3.i) and combine reward models using the Take last (Figure 3.d) approach.
As shown in Figure 5a, all methods demonstrate significant improvements over a single iteration, particularly in preventing performance collapse at higher KL divergences. The Concatenate (Figure 3.b) strategy achieves consistently higher gold scores, especially in the KL range of 50-200 nats, where it maintains a performance advantage over other methods. While the Take last (Figure 3.a) and Sample (Figure 3.c) approaches show similar trends and substantial improvements over iteration 1, they do not quite match the performance of full data concatenation. This result is coherent with the finding that increasing training dataset size reduces reward model overoptimisation [11], explaining why the sampling strategy is outperformed by concatenating all datasets. A critical observation is that beyond KL ≈ 200, the baseline iteration 1 experiences severe performance degradation due to overoptimisation, dropping to negative gold scores. In contrast, all iterative approaches maintain positive performance even at high KL values, demonstrating their effectiveness in mitigating overoptimisation. This ranking of methods is not only observed in the final iteration, but is already exhibited as early as the second iteration, as shown in Figure 2 and in Appendix E.2.

Ensuring full coverage of the prompts when sampling matters less. While the sampling strategy slightly outperformed taking only the newest preference dataset, it did not achieve the same level of performance as concatenating all data. Here we take a closer look at the sampling strategy. In Figure 5b we compare standard sampling with potential prompt repetition (Sample) and sampling where each prompt appears exactly once (Sample Exclusive). While Sample Exclusive shows
a slight advantage in the mid-KL regime (100-250 nats), the differences are modest, suggesting that prompt repetition plays a minor role in model performance and in mitigating overoptimisation. These observations also hold throughout the earlier iterations (see Appendix E.2), highlighting that while data combination strategies are effective at preventing overoptimisation, the computational cost of maintaining and training on growing datasets remains a necessity, as more efficient methods are unable to achieve the same performance as Concatenate. This motivates our exploration of alternative approaches focusing on the combination of reward models in the parameter space, which could potentially offer similar benefits with reduced computational overhead.

Figure 5: Iterated RLHF benefits most from scaling reward model training data. (a) Concatenating all preference data across iterations consistently outperforms sampling or using only recent data, particularly at mid KL values (50-200 nats), and best mitigates overoptimisation. (b) Sampling with or without prompt repetition yields similar results, suggesting prompt coverage has limited impact. (c) Parameter-space reward model combinations like ensembling and weight averaging offer efficiency gains but do not surpass the simpler Take last approach. Concatenation remains most effective, motivating efficient alternatives.
5.3 Combining reward models

No free lunch by merging reward models. The method of concatenating all preference data, previously shown to be most effective, represents the level of performance we aim to achieve. As shown in Figure 5c, all approaches demonstrate similar patterns of improvement in the early KL regions (up to ≈ 50 nats), reaching comparable performance levels. The Weight Average (Figure 3.f) and Ensemble (Figure 3.e) methods show consistent performance throughout, while maintaining computational efficiency compared to the data concatenation approach. However, using a reward model ensemble scales inference time and memory requirements during policy optimisation. We observe that the mean objective does not yield significant performance gains over the Take last approach relying on a single reward model (consistent with the findings for the single-iteration setting in [5]). While weight averaged reward models have been shown to outperform ensembles [33], we do not observe significant differences except in the mid-KL regime.

Figure 6: Impact of reward model size and design choices on gold score across iterations. Larger reward models (160M, solid lines) consistently outperform smaller ones (70M, dashed), with the biggest gains seen in Ensemble and Worst-Case Optimisation, suggesting that these strategies benefit most from increased reward model capacity. While From SFT remains the most stable across scales, LITI shows steady improvement, especially with the larger model.
In contrast to [5, 33], we ensemble and weight average models that have been trained on data with significantly different joint distributions over pairs (x, y). Regardless, both methods still provide significant improvements when comparing the fourth and first iterations. The various reward model combination methods in RLHF perform similarly, suggesting computational efficiency should drive selection.

Larger reward models benefit more from combining reward models. We now investigate how scaling the reward model size affects performance in iterative RLHF. While concatenating all preference data with policy initialisation from the SFT checkpoint remains the most robust approach, we observe that alternative reward model strategies benefit significantly from increased reward model capacity. As shown in Figure 6, performance differences between the 70M and 160M reward models are most pronounced for Ensemble and Worst-Case Optimisation, with both methods substantially improving at the larger scale and approaching the performance of the data concatenation baseline by the fourth iteration. This suggests that while reward model combination methods did not match the effectiveness of preference data concatenation at smaller scales, their potential is unlocked with

Figure 7: Effect of policy initialisation on overoptimisation and recovery across iterations. (a) Fourth iteration. (b) Across iterations. At iteration 4, initialising From SFT is most robust, avoiding overoptimisation through repeated resets and data aggregation. In contrast, LITI and Take last start from high KL due to cumulative divergence.
When Plotting reward against timesteps and across iterations we observe that overoptimised policies are hard to recover: Take last degrades further, while LITI shows steady improvement, though neither fully closes the gap to From SFT . more expressive reward models. These results highlight that design choices affecting reward model size not only influence individual model accuracy but can significantly enhance the utility of design choices combining reward models in iterated RLHF settings. We next examine if policy initialization strategies can complement reward modelling and preference aggregation to prevent overoptimisation. 5.4 Policy initialisation Initialising from SFT is the most robust. Finally, comparing the policy initialisation methods we observe that no method improves on the KL-reward Pareto front achieved by concatenating all preference data and initialising the policy from the SFT checkpoint (Figure 7a). Sampling the preference data is similarly robust, highlighting that initialising with From SFT results in generally reduced overoptimisation. Note, LITI andTake last start from significantly larger KL due the compounding of KL through repeated initialisation increasingly further away from πsftin the KL space. Resetting the policy at each iteration combined with the aggregation of preference data results in consistently less overoptimisation and more performant policies. Although, initialisation with πsft limits the flexibility and potential gains that could be realised by continued optimisation. Overoptimised policies are
hard to recover from. While From SFT is reset at the end of each iteration, LITI andTake last have to recover form the initial overoptimisation, as shown in Figure 7b. The behaviour in earlier iterations reveals the consistent performance improvements attained with LITI. On the other hand, Take last is unable to recover after overoptimising again in the second iteration, despite the counterpart – sampling preference data but initialising From SFT – improving with each iteration. Due to entropy decreasing caused by to the prolonged optimisation when using theTake last initialisation, the mean gold reward does not exceed zero in the third and fourth iterations. Despite LITI improving on average across multiple seeds, we observe that linear interpolation is also unable to recover strongly overoptimised seeds (see Appendix E.4). Thus, while From SFT is most robust, it is also limited by the repeated initialisation from πsft. Policy interpolation works better with larger reward models. We hypothesise that LITI could achieve similar or higher gold scores than From SFT after more iterations. Supporting this, our experiments with a larger reward model show that LITI benefits substantially from increased reward model capacity (see Figure 6). This improvement likely stems both from better-calibrated gradients that support recovery, and from the fact that larger reward models tend to overoptimise less aggressively [ 11], resulting in safer intermediate policies and more stable interpolation paths. These findings highlight the importance of early stopping and reward model design when using policy initialisation methods other than From SFT , and suggest that LITI may become increasingly competitive as reward model expressiveness scales. 6 Conclusion In this work we have presented the first investigation of reward model overoptimisation in iter- ated RLHF. 
Through simulations with a gold-standard reward model and analysis of distributional discrepancies, we have demonstrated that overoptimisation diminishes across iterations as reward models better approximate the ground truth. However, improvements begin to plateau after three iterations. While completely eliminating overoptimisation remains unattainable, we have identified base policy initialisation as the most robust approach, despite its reduced optimisation flexibility. Our analysis provides practical guidelines for implementing iterated RLHF and illuminates fundamental trade-offs in iterative preference learning, establishing a foundation for future research in reliable RLHF systems.

Acknowledgements

LW was supported by the UK Engineering and Physical Sciences Research Council (EP/S021566/1).

References

[1] Leonard Adolphs, Tianyu Gao, Jing Xu, Kurt Shuster, Sainbayar Sukhbaatar, and Jason Weston. The CRINGE loss: Learning what language not to model. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL'23), 2023. [2] Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, Ben Mann, and Jared Kaplan. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022. [3] Ralph Allan Bradley and Milton E. Terry. Rank analysis of incomplete
block designs: I. the method of paired comparisons. Biometrika , 39(3/4):324–345, 1952. [4]Lichang Chen, Chen Zhu, Jiuhai Chen, Davit Soselia, Tianyi Zhou, Tom Goldstein, Heng Huang, Mohammad Shoeybi, and Bryan Catanzaro. ODIN: Disentangled reward mitigates hacking in RLHF. In Proceedings of the 41st International Conference on Machine Learning (ICML’24) , 2024. [5]Thomas Coste, Usman Anwar, Robert Kirk, and David Krueger. Reward model ensembles help mitigate overoptimization. In Proceedings of the 12th International Conference on Learning Representations (ICLR’24) , 2024. [6]Nirjhar Das, Souradip Chakraborty, Aldo Pacchiano, and Sayak Ray Chowdhury. Provably Sample Efficient RLHF via Active Preference Optimization. arXiv preprint arXiv:2402.10500 , 2024. [7]Hanze Dong, Wei Xiong, Bo Pang, Haoxiang Wang, Han Zhao, Yingbo Zhou, Nan Jiang, Doyen Sahoo, Caiming Xiong, and Tong Zhang. RLHF Workflow: From Reward Modeling to Online RLHF. Transactions on Machine Learning Research , September 2024. [8]Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. AlpacaFarm: A Simulation Framework for Methods that Learn from Human Feedback. In Proceedings of the 37th Annual Conference on Neural Information Processing Systems (NeurIPS’23) , 2023. [9]Jacob Eisenstein, Chirag Nagpal, Alekh Agarwal, Ahmad Beirami, Alex D’Amour, DJ Dvi- jotham, Adam Fisch, Katherine Heller, Stephen Pfohl, Deepak Ramachandran, et al. Helping or herding? reward model ensembles mitigate but do not eliminate reward hacking. In Proceedings of the 1st Conference on Language Modeling (COLM’24) , 2024. [10] Adam Fisch, Jacob Eisenstein, Vicky Zayats, Alekh Agarwal, Ahmad Beirami, Chirag Nagpal, Pete Shaw, and Jonathan Berant. Robust preference optimization through reward model distillation. arXiv preprint arXiv:2405.19316 , 2024. [11] Leo Gao, John Schulman, and Jacob Hilton. Scaling Laws for Reward Model Overoptimization. 
In Proceedings of the 40th International Conference on Machine Learning (ICML'23), 2023. [12] Adam Gleave, Michael Dennis, Cody Wild, Neel Kant, Sergey Levine, and Stuart Russell. Adversarial Policies: Attacking Deep Reinforcement Learning. In Proceedings of the 8th International Conference on Learning Representations (ICLR'20), 2020. [13] Alexey Gorbatovski, Boris Shaposhnikov, Alexey Malakhov, Nikita Surnachev, Yaroslav Aksenov, Ian Maksimov, Nikita Balagansky, and Daniil Gavrilov. Learn your reference model for real good alignment. arXiv preprint arXiv:2404.09656, 2024. [14] Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. Journal of Machine Learning Research, 13(25):723–773, 2012. [15] Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Suchin Gururangan, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi. Editing models with task arithmetic. In Proceedings of the 11th International Conference on Learning Representations (ICLR'23), 2023. [16] Victoria Krakovna, Jonathan Uesato, Vladimir Mikulik, Matthew Rahtz, Tom Everitt, Ramana Kumar, Zac Kenton, Jan Leike, and Shane Legg. Specification gaming: the flip side of AI ingenuity. https://deepmind.google/discover/blog/specification-gaming-the-flip-side-of-ai-ingenuity/, 2020. Accessed: 2025-05-02. [17] Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi-Rui Tam, Keith Stevens, Abdullah Barhoum, Nguyen Minh Duc, Oliver Stanley, Richárd Nagyfi, Shahul ES, Sameer Suri, David Glushkov, Arnav Dantuluri, Andrew Maguire, Christoph Schuhmann, Huu Nguyen, and Alexander Mattick. OpenAssistant Conversations – Democratizing Large Language Model Alignment. In NeurIPS 2023 Datasets and Benchmarks, 2023. [18] Nathan Lambert, Valentina
Pyatkin, Jacob Morrison, LJ Miranda, Bill Yuchen Lin, Khyathi Chandu, Nouha Dziri, Sachin Kumar, Tom Zick, Yejin Choi, Noah A. Smith, and Hannaneh Hajishirzi. RewardBench: Evaluating Reward Models for Language Modeling. arXiv preprint arXiv:2403.13787, 2024. [19] Weiyang Liu, Bo Dai, Ahmad Humayun, Charlene Tay, Chen Yu, Linda B. Smith, James M. Rehg, and Le Song. Iterative machine teaching. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 2149–2158, 06–11 Aug 2017. [20] Zhihan Liu, Miao Lu, Shenao Zhang, Boyi Liu, Hongyi Guo, Yingxiang Yang, Jose Blanchet, and Zhaoran Wang. Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer. In Proceedings of the 38th Annual Conference on Neural Information Processing Systems (NeurIPS'24), 2024. [21] Xingzhou Lou, Dong Yan, Wei Shen, Yuzi Yan, Jian Xie, and Junge Zhang. Uncertainty-aware reward model: Teaching reward models to know what is unknown. arXiv preprint arXiv:2410.00847, 2024. [22] Debmalya Mandal, Stelios Triantafyllou, and Goran Radanovic. Performative reinforcement learning. In Proceedings of the 40th International Conference on Machine Learning (ICML'23), 2023. [23] Viraj Mehta, Vikramjeet Das, Ojash Neopane, Yijia Dai, Ilija Bogunovic, Jeff Schneider, and Willie Neiswanger. Sample Efficient Reinforcement Learning from Human Feedback via Active Exploration. arXiv preprint arXiv:2312.00267, 2023. [24] Yuchun Miao, Sen Zhang, Liang Ding, Rong Bao, Lefei Zhang, and Dacheng Tao. InfoRM: Mitigating Reward Hacking in RLHF via Information-Theoretic Reward Modeling. In Proceedings of the 38th Annual Conference on Neural Information Processing Systems (NeurIPS'24), 2024. [25] Ted Moskovitz, Aaditya K. Singh, DJ Strouse, Tuomas Sandholm, Ruslan Salakhutdinov, Anca D. Dragan, and Stephen McAleer.
Confronting Reward Model Overoptimization with Constrained RLHF. In Proceedings of the 12th International Conference on Learning Representations (ICLR'24), 2024. [26] William Muldrew, Peter Hayes, Mingtian Zhang, and David Barber. Active preference learning for large language models. In Proceedings of the 41st International Conference on Machine Learning (ICML'24), 2024. [27] Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155, 2022. [28] Ryan Park, Rafael Rafailov, Stefano Ermon, and Chelsea Finn. Disentangling length from quality in direct preference optimization. In Findings of the Association for Computational Linguistics: ACL 2024, pages 4998–5017, 2024. [29] Juan Perdomo, Tijana Zrnic, Celestine Mendler-Dünner, and Moritz Hardt. Performative prediction. In Proceedings of the 37th International Conference on Machine Learning (ICML'20), 2020. [30] Rafael Rafailov, Yaswanth Chittepu, Ryan Park, Harshit Sikchi, Joey Hejna, W. Bradley Knox, Chelsea Finn, and Scott Niekum. Scaling laws for reward model overoptimization in direct alignment algorithms. In Proceedings of the 38th Annual Conference on Neural Information Processing Systems (NeurIPS'24), 2024. [31] Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and Chelsea Finn. Direct Preference
Optimization: Your Language Model is Secretly a Reward Model. In Proceedings of the 37th Annual Conference on Neural Information Processing Systems (NeurIPS'23), 2023. [32] Alexandre Ramé, Johan Ferret, Nino Vieillard, Robert Dadashi, Léonard Hussenot, Pierre-Louis Cedoz, Pier Giuseppe Sessa, Sertan Girgin, Arthur Douillard, and Olivier Bachem. WARP: On the Benefits of Weight Averaged Rewarded Policies. arXiv preprint arXiv:2406.16768, 2024. [33] Alexandre Ramé, Nino Vieillard, Léonard Hussenot, Robert Dadashi, Geoffrey Cideron, Olivier Bachem, and Johan Ferret. WARM: On the Benefits of Weight Averaged Reward Models, 2024. [34] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017. [35] Prasann Singhal, Tanya Goyal, Jiacheng Xu, and Greg Durrett. A Long Way to Go: Investigating Length Correlations in RLHF. In Proceedings of the 1st Conference on Language Modeling (COLM'24), 2024. [36] Joar Skalse, Lucy Farnik, Sumeet Ramesh Motwani, Erik Jenner, Adam Gleave, and Alessandro Abate. STARC: A General Framework For Quantifying Differences Between Reward Functions. In Proceedings of the 12th International Conference on Learning Representations (ICLR'24), 2024. [37] Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul Christiano. Learning to summarize from human feedback. In Proceedings of the 34th Annual Conference on Neural Information Processing Systems (NeurIPS'20), 2020. [38] Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford Alpaca: An Instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca, 2023. [39] Tianlu Wang, Ilia Kulikov, Olga Golovneva, Ping Yu, Weizhe Yuan, Jane Dwivedi-Yu, Richard Yuanzhe Pang, Maryam Fazel-Zarandi, Jason Weston, and Xian Li. Self-taught evaluators.
arXiv preprint arXiv:2408.02666, 2024. [40] Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gontijo Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, and Ludwig Schmidt. Robust fine-tuning of zero-shot models. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR'22), 2022. [41] Wei Xiong, Hanze Dong, Chenlu Ye, Ziqi Wang, Han Zhong, Heng Ji, Nan Jiang, and Tong Zhang. Iterative preference learning from human feedback: Bridging theory and practice for RLHF under KL-constraint. In Proceedings of the 41st International Conference on Machine Learning (ICML'24), 2024. [42] Adam X. Yang, Maxime Robeyns, Thomas Coste, Jun Wang, Haitham Bou-Ammar, and Laurence Aitchison. Bayesian Reward Models for LLM Alignment. In Proceedings of the ICLR 2024 Workshop on Secure and Trustworthy Large Language Models, 2024. [43] Rui Yang, Ruomeng Ding, Yong Lin, Huan Zhang, and Tong Zhang. Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs. In Proceedings of the 38th Annual Conference on Neural Information Processing Systems (NeurIPS'24), 2024. [44] Chenlu Ye, Wei Xiong, Yuheng Zhang, Nan Jiang, and Tong Zhang. Online Iterative Reinforcement Learning from Human Feedback with General Preference Model. In Proceedings of the 38th Annual Conference on Neural Information Processing Systems (NeurIPS'24), 2024. [45] Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Xian Li, Sainbayar Sukhbaatar, Jing Xu, and Jason Weston. Self-rewarding language models. In Proceedings of the 41st International Conference on Machine
Learning (ICML'24), 2024. [46] Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2020.

A Reward model comparison with the Maximum Mean Discrepancy

Formally, our goal is to compare any two reward functions $R_{\phi_1}$ and $R_{\phi_2}$. As the first step, we scale both reward functions to have mean zero and variance one. This ensures that reward functions which differ only by an affine transformation are treated as equal after scaling. For details about this result, please refer to Appendix B. This is desirable since affine transformations do not affect the ordering over policies induced by the original and transformed reward functions when they are optimised [36]. As the second step, we compute the discrepancy between $R_{\phi_1}$ and $R_{\phi_2}$. While we have reward functions in principle, during training only samples of rewards from the true and proxy reward functions are observed. Given that prompts are independently and identically distributed, $x_i \overset{\text{i.i.d.}}{\sim} \rho$ and $y_i \sim \pi_\theta(\cdot \mid x_i)$, the observed rewards $r_i = R_\phi(x_i, y_i)$ are i.i.d. samples (details in Appendix B). As a consequence, we can rely on the Maximum Mean Discrepancy (MMD) to measure the discrepancy between the distributions of observed rewards from $R_{\phi_1}$ and $R_{\phi_2}$. The MMD compares two distributions based on their distances in the feature space determined by the chosen kernel. It is known for its strong theoretical guarantees and is commonly used in the two-sample testing literature [14]. We use the popular squared exponential kernel. Given samples $\mathbf{r}_{\phi_1} := \{r_{\phi_1,1}, \ldots, r_{\phi_1,n}\}$ and $\mathbf{r}_{\phi_2} := \{r_{\phi_2,1}, \ldots, r_{\phi_2,n}\}$, an unbiased empirical estimate of the MMD is obtained by
$$\mathrm{MMD}^2_u[\mathbf{r}_{\phi_1}, \mathbf{r}_{\phi_2}] = \frac{1}{n(n-1)} \sum_{i=1}^{n} \sum_{j \neq i}^{n} k(r_{\phi_1,i}, r_{\phi_1,j}) + \frac{1}{n(n-1)} \sum_{i=1}^{n} \sum_{j \neq i}^{n} k(r_{\phi_2,i}, r_{\phi_2,j}) - \frac{2}{n^2} \sum_{i=1}^{n} \sum_{j=1}^{n} k(r_{\phi_1,i}, r_{\phi_2,j}).$$
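To make the two-step procedure concrete, the following is a minimal NumPy sketch (our own illustration, not the paper's code) that standardises two reward samples and computes the unbiased MMD² estimate above with a squared exponential kernel; the bandwidth is a free choice not specified here.

```python
import numpy as np

def standardise(r):
    # Step 1: scale rewards to mean 0 and variance 1, so reward functions
    # differing only by an affine transformation compare as equal (Prop. B.1).
    return (r - r.mean()) / r.std()

def mmd2_unbiased(r1, r2, bandwidth=1.0):
    """Step 2: unbiased empirical MMD^2 between two equally sized reward
    samples, using the squared exponential kernel k(a, b) = exp(-(a-b)^2 / 2h^2).
    The bandwidth h is an assumption of this sketch, not taken from the paper."""
    r1 = standardise(np.asarray(r1, dtype=float))
    r2 = standardise(np.asarray(r2, dtype=float))
    k = lambda a, b: np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * bandwidth**2))
    n, m = len(r1), len(r2)
    K11, K22, K12 = k(r1, r1), k(r2, r2), k(r1, r2)
    # Exclude the i == j diagonal terms, matching the j != i sums in the formula.
    term1 = (K11.sum() - np.trace(K11)) / (n * (n - 1))
    term2 = (K22.sum() - np.trace(K22)) / (m * (m - 1))
    term3 = 2.0 * K12.mean()
    return term1 + term2 - term3
```

Because standardisation removes affine differences, `mmd2_unbiased(r, a * r + b)` with `a > 0` returns exactly the same value as comparing `r` with itself; note also that the unbiased estimator can be slightly negative for samples from the same distribution.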
Note here that the observations $\mathbf{r}_{\phi_1}$ and $\mathbf{r}_{\phi_2}$ cannot be assumed to be independent: when comparing reward models across iterations, or proxy reward models with the gold reward model, independence is not guaranteed. This two-step procedure allows us to perform a detailed comparison of reward models, going beyond the measurement of the mean gold reward.

B Proofs

Proposition B.1. Let $R_{\phi_1}, R_{\phi_2} \in \mathcal{R}$ be two reward functions and suppose they differ by an affine transformation, i.e. $R_{\phi_2} = a \cdot R_{\phi_1} + b$ for some $a \in \mathbb{R}^+$ and $b \in \mathbb{R}$. Then $R'_{\phi_1} = R'_{\phi_2}$, where $R'_{\phi_i} = \frac{1}{\sigma_i}(R_{\phi_i} - \mu_i)$, with $\sigma_i$ the standard deviation of $R_{\phi_i}$ and $\mu_i$ its mean.

Proof of Proposition B.1. First note that $R_2 = a' \cdot R'_1 + b'$, with $a' = a \cdot \sigma_1 \in \mathbb{R}^+$ and $b' = b + a \cdot \mu_1$. We have that $\mu_2 = \mathbb{E}(R_2) = b'$ and $\sigma_2 = a'$. Hence
$$R'_2 = \frac{R_2 - \mu_2}{\sigma_2} = \frac{R_2 - b'}{a'} = \frac{a' R'_1 + b' - b'}{a'} = R'_1.$$

Proposition B.2. Given i.i.d. observations $x_1, \ldots, x_n$ from a random variable $x \sim \rho$, and a policy $\pi_\theta$, we have that the observations of rewards $r_1, \ldots, r_n$, where $r_i = R_\phi(x_i, y_i)$ for a deterministic reward function $R_\phi$ and $y_i \sim \pi_\theta(\cdot \mid x_i)$ for $i = 1, \ldots, n$, are i.i.d. observations of a random variable we denote by $Z$.

Proof of Proposition B.2. Given that the $X_i$ are independent and identically distributed (i.i.d.) and that $Y_i \sim \pi(\cdot \mid X_i)$, we first show that the $Y_i$ are i.i.d. To determine whether the $Y_i$ are independent, we need to check whether the joint distribution of any pair $(Y_i, Y_j)$ for $i \neq j$ factorises into the product of their marginal distributions. Since the $X_i$ are i.i.d., we have
$$P(X_i, X_j) = P(X_i) P(X_j) \quad \text{for } i \neq j.$$
Given $Y_i \sim \pi(\cdot \mid X_i)$, $Y_i$ and $Y_j$ are conditionally independent given
$X_i, X_j$ for $i \neq j$, and the conditional distribution of $Y_i$ given $X_i$ is independent of $X_j$ for $j \neq i$, such that
$$P(Y_i, Y_j \mid X_i, X_j) = P(Y_i \mid X_i) P(Y_j \mid X_j).$$
Using the law of total probability, the joint distribution $P(Y_i, Y_j)$ can be written as
$$P(Y_i, Y_j) = \iint P(Y_i, Y_j \mid X_i, X_j) P(X_i, X_j)\, dX_i\, dX_j.$$
Substituting the factored form of the conditional and marginal distributions, we get
$$P(Y_i, Y_j) = \iint P(Y_i \mid X_i) P(Y_j \mid X_j) P(X_i) P(X_j)\, dX_i\, dX_j.$$
Since $P(X_i)$ and $P(X_j)$ are independent, this simplifies to
$$P(Y_i, Y_j) = \left( \int P(Y_i \mid X_i) P(X_i)\, dX_i \right) \times \left( \int P(Y_j \mid X_j) P(X_j)\, dX_j \right).$$
This shows that $P(Y_i, Y_j) = P(Y_i) P(Y_j)$, which means $Y_i$ and $Y_j$ are independent for $i \neq j$. We now check whether the $Y_i$ are identically distributed. Since $Y_i \sim \pi(\cdot \mid X_i)$ and the $X_i$ are i.i.d., the marginal distribution of $Y_i$ is obtained by marginalising over $X_i$, which yields
$$P(Y_i = y) = \int P(Y_i = y \mid X_i = x) P(X_i = x)\, dx.$$
Given that the $X_i$ are identically distributed, the distribution $P(X_i)$ is the same for all $i$. Therefore, the marginal distribution $P(Y_i)$ is the same for all $i$, indicating that the $Y_i$ are identically distributed.

Now, given $R_i = r(X_i, Y_i)$ where $r$ is some deterministic function, we need to determine whether the $R_i$ are i.i.d., given that the $X_i$ are i.i.d. and $Y_i \sim \pi(\cdot \mid X_i)$. Since the $X_i$ are i.i.d., $X_i$ and $X_j$ are independent for $i \neq j$. We have established that $Y_i$ and $Y_j$ are also independent for $i \neq j$. Because $r$ is a deterministic function, $R_i$ is fully determined by $(X_i, Y_i)$; specifically, $R_i = r(X_i, Y_i)$ and $R_j = r(X_j, Y_j)$. Given that $(X_i, Y_i)$ and $(X_j, Y_j)$ are independent pairs, it follows that $R_i$ and $R_j$ are also independent: the independence of $(X_i, Y_i)$ and $(X_j, Y_j)$ implies that the mapping through $r$ does not introduce any new dependency between $R_i$ and $R_j$. Next, we check whether the $R_i$ are identically distributed. Since the $X_i$ are i.i.d. and $Y_i \sim \pi(\cdot \mid X_i)$, the distribution of $(X_i, Y_i)$ is the same for all $i$. The function $r$ is deterministic and applies the same transformation to each pair $(X_i, Y_i)$. Therefore, the distribution of $R_i = r(X_i, Y_i)$ is the same for all $i$. This concludes the proof.
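As a quick numerical sanity check of Proposition B.1 (our own illustration, not part of the paper), standardising a reward sample and a positive affine transformation of it yields identical values:

```python
import numpy as np

# Check Proposition B.1 numerically: rewards differing by an affine
# transformation R2 = a * R1 + b with a > 0 coincide after standardisation.
rng = np.random.default_rng(42)
r1 = rng.normal(size=1000)
a, b = 2.5, -3.0
r2 = a * r1 + b

z1 = (r1 - r1.mean()) / r1.std()
z2 = (r2 - r2.mean()) / r2.std()
assert np.allclose(z1, z2)  # R'_1 == R'_2, as the proposition states
```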
C Additional experimental details

C.1 Hyperparameters

Our hyperparameter settings mostly align with those used by the authors in [5]. The parameters for supervised fine-tuning are given in Table 1, reward model training hyperparameters are specified in Table 2, PPO parameters are given in Table 3, and the hyperparameters for synthesis with a policy are provided in Table 4.

Table 1: SFT hyperparameters.
  Learning rate: 8e-6
  Epochs: 3
  Batch size: 4

Table 2: RM hyperparameters.
  Learning rate: 1e-5
  Epochs: 5
  Batch size: 32

Table 3: PPO hyperparameters.
  Learning rate: 1e-6
  Cosine annealing scheduler: 1e-7
  PPO steps: 6000
  Batch size: 32
  Number of rollouts: 256
  Chunk size: 32
  Clipping range & value: 0.2
  GAE lambda: 0.95

C.2 Dataset

We use the instructions and inputs contained in the popular AlpacaFarm dataset [8, 38]. The entire dataset contains 52,000 samples split into "sft" (10k), "preference" (20k), "unlabeled" (20k), and "val" (2k). We use the "val" split strictly for validation. The instructions for reward model training are sampled from the "preference" split, and the instructions for PPO are sampled from the "unlabeled" split.

C.3 Prompt format

We follow the prompt format used in [5, 17], which is the v2 format used in Open Assistant. It uses the special tokens <|prompter|> and <|assistant|>, and is consistent with the GPTNeoXTokenizer class. To generate answers, the model is prompted with the concatenation of
instruction and input (if present), where inputs begin on a new line. The entire prompt begins with the special token <|prompter|> and ends with the end-of-text token <|endoftext|> to indicate the end of the instruction, followed by the <|assistant|> token to start generating the answer. In the case of the reward model, the prompt should additionally contain an answer to the instruction, which is appended to the initial prompt and again terminated with the <|endoftext|> token. Examples for both generation and reward modelling are given in Table 5.

C.4 Computational setup and cost

All experiments were run on a single Nvidia A100. Running the full pipeline, consisting of all three RLHF steps for four iterations, takes approximately 35 hours per seed and configuration. Subsequently labelling the results with the 7B gold reward model takes approximately 18 hours when using an evaluation set of size 2000 and evaluating every 300 steps.

Table 4: Generation hyperparameters.
  Max instruction length: 520
  Max new tokens: 256
  PPO epochs: 4
  Top-p: 0.9 (1.0 for PPO)
  Top-k: 0
  Temperature: 1.0

Table 5: Example answer generation and reward modelling prompts with proper formatting.
  Answer generation prompt:
    <|prompter|>Categorize the following items as either furniture or kitchen items. \nChair, Knife, Fork<|endoftext|><|assistant|>
  Reward modelling prompt:
    <|prompter|>Categorize the following items as either furniture or kitchen items. \nChair, Knife, Fork<|endoftext|><|assistant|>Furniture: Chair, Kitchen: Knife, Fork<|endoftext|>

D Iterated RLHF and Performative Prediction

We note that the framework of performative prediction applies to our setting. In fact, when performing iterated RLHF we are simulating performative prediction, or more specifically a version of strategic classification.
We have that a reward model $R_\phi$ induces a potentially different distribution $\mathcal{D}(\phi)$ over instances $(x, y)$, where continuations $y$ are obtained from the policy $\pi_\theta$ optimised for $R_\phi$, which yields that a reward model $R_{\phi_{PO}}$ is performatively optimal if
$$\phi_{PO} = \arg\min_\phi \; \mathbb{E}_{(x,y) \sim \mathcal{D}(\phi)}\, \ell(x, y, \phi).$$
Furthermore, [29] call a model $R_{\phi_{PS}}$ performatively stable if
$$\phi_{PS} = \arg\min_\phi \; \mathbb{E}_{(x,y) \sim \mathcal{D}(\phi_{PS})}\, \ell(x, y, \phi).$$
Intuitively, retraining a performatively stable reward model after optimising against it will yield the same reward model. As such, the reward model would not be over-optimised and would still perform optimally on its induced distribution. In their Theorem 3.5, the authors of [29] provide three conditions under which the reward model obtained from repeated iterations of RLHF converges to a unique performatively stable reward model at a linear rate: the loss must be $\beta$-jointly smooth and $\gamma$-strongly convex, and the map $\mathcal{D}(\cdot)$ from reward model parameters to the distribution of prompt-continuation pairs must be $\epsilon$-sensitive [29]. Since, as part of the map $\mathcal{D}(\cdot)$, the policy is optimised with PPO, where small changes in the reward model can lead to significant changes in the optimal policy, this mapping is generally not $\epsilon$-sensitive. As a consequence, linear convergence is not guaranteed. Note that we may still aim for close-to-linear convergence by making adjustments to satisfy the stated conditions.

E Additional results

E.1 Closing the gap between proxy and gold reward function

Here we provide additional experimental results for taking the last preference dataset and sampling the preference datasets with equal
proportion. In terms of the rate at which the gap between proxy and gold reward functions is reduced over iterations, the sampling strategy (see Figure 8) falls in between concatenating all preference data and taking only the last dataset (see Figure 9).

E.2 Additional results for combining preference data

In Figure 10 we provide the individual seeds for methods combining preference data across all iterations, and in Figures 11 and 12 we provide the results for the sampling strategies. Figure 13 shows the MMD across iterations when only using the most recent preference dataset.

Figure 8: The gap between gold and proxy reward function when sampling equally from all preference datasets to form the reward model training data.

Figure 9: The gap between gold and proxy reward function when only taking the last preference dataset for reward model training.

Figure 10: Gold score and KL of individual seeds across iterations for varying preference data combination methods.

Figure 11: Gold score and KL of individual seeds across iterations comparing sampling with full coverage of the prompts vs. random sampling.

Figure 12: Gold score and KL of individual seeds in the fourth iteration comparing sampling with full coverage of the prompts vs. random sampling.

E.3 Additional results for reward model transfer

Here we provide additional results for methods addressing reward model transfer. Figures 14 and 15 show the individual training seeds of the methods across iterations.

E.4 Additional results for policy initialisation

Here we provide additional results for the policy initialisation methods (Figures 16 and 17). In particular, we plot the runs associated with each seed, highlighting seeds that are strongly overoptimised and cannot be recovered by the respective methods.

Figure 13: [MMD vs. KL (nats), Take last, iterations 0–3.] Taking the last preference dataset results in consistently low MMD in the final iteration.
Figure 14: Gold score and KL of individual seeds across iterations comparing reward function choices.

Figure 15: Gold score and KL of individual seeds in the fourth iteration comparing reward function choices.

Figure 16: Gold score and KL of individual seeds across iterations comparing policy initialisation methods.

Figure 17: Gold score and KL of individual seeds in the final iteration comparing policy initialisation choices.
Stitching random text fragments into long-form narratives

Chau Minh Pham, Jenna Russell, Dzung Pham, Mohit Iyyer
University of Maryland, College Park; UMass Amherst
{chau,jennarus,miyyer}@umd.edu, dungpham@umass.edu

Abstract

We introduce Frankentexts, a new type of long-form narratives produced by LLMs under the extreme constraint that most tokens (e.g., 90%) must be copied verbatim from human writings. This task presents a challenging test of controllable generation, requiring models to satisfy a writing prompt, integrate disparate text fragments, and still produce a coherent narrative. To generate Frankentexts, we instruct the model to produce a draft by selecting and combining human-written passages, then iteratively revise the draft while maintaining a user-specified copy ratio. We evaluate the resulting Frankentexts along three axes: writing quality, instruction adherence, and detectability. Gemini-2.5-Pro performs surprisingly well on this task: 81% of its Frankentexts are coherent and 100% relevant to the prompt. Notably, up to 59% of these outputs are misclassified as human-written by detectors like Pangram, revealing limitations in AI text detectors. Human annotators can sometimes identify Frankentexts through their abrupt tone shifts and inconsistent grammar between segments, especially in longer generations. Beyond presenting a challenging generation task, Frankentexts invite discussion on building effective detectors for this new grey zone of authorship, provide training data for mixed-authorship detection, and serve as a sandbox for studying human-AI co-writing processes.

Code: https://github.com/chtmp223/Frankentext

1 Introduction

In Mary Shelley's classic novel Frankenstein, the scientist Victor Frankenstein assembles a creature from fragments of human corpses and brings it to life (Shelley, 1818).
Though stitched together from disparate parts, the creature emerges as a disturbingly articulate and clever being. We draw inspiration from this story to explore what we call "Frankentexts": narratives constructed by LLMs under the constraint that the majority of the output (e.g., 90% of the words) must be copied verbatim from a collection of human writings, with only minimal connective text added by the model. Frankentexts present a novel and extreme test of controllable text generation: the LLM's output must adhere to a given story prompt and maintain global coherence while simultaneously following the verbatim copying constraint. Integrating these disjointed fragments, which were never intended to go together, requires advanced instruction-following and long-context abilities as well as creativity. This scenario differs significantly from traditional controllable generation tasks: while previous methods softly enforce high-level outlines (Fan et al., 2018; Yao et al., 2019) or lexical constraints (Sun et al., 2021), Frankentexts require reusing actual human-written passages at scale, which is a fundamentally new challenge. To our knowledge, our work is the first to systematically study this generation paradigm.

Broader implications: Frankentexts lie within a grey zone of authorship: they are arranged by an LLM yet composed mostly of human-written text, breaking the binary "AI vs. human" assumption baked into today's detectors of AI-generated content. In fact, we show that Frankentexts are routinely misclassified as fully human-written, posing a novel attack vector (e.g., for academic integrity) in which users construct high-quality Frankentexts to evade detection. This detection
https://arxiv.org/abs/2505.18128v1
failure highlights the need for fine-grained detectors capable of token-level attribution, and our pipeline synthetically supplies the supervision they lack: every Frankentext comes with labels marking copied versus LLM-generated segments, thus providing an inexpensive, large-scale training source for mixed-authorship detection. At the same time, the controllable construction process offers a tunable sandbox for studying human-AI co-writing: by manipulating the proportion, length, and topical diversity of human excerpts, researchers can run systematic experiments on stylistic blending and revision dynamics that collaborative writing studies (Akoury et al., 2020; Lee et al., 2022; Yuan et al., 2022) seldom make possible at scale.

arXiv:2505.18128v1 [cs.CL] 23 May 2025

Figure 1: The Frankentexts pipeline. First, random paragraphs are sampled from a large corpus of human-written books. Then, an LLM is prompted with the paragraphs, a writing prompt, and instructions to include a certain percentage of verbatim human text, to generate the first draft of a Frankentext. Further edits can be made by the same model to produce a coherent and faithful final version. See Algorithm 1 for the pipeline’s pseudocode.

Using LLMs to generate Frankentexts: We develop a prompt-based pipeline to generate Frankentexts: an LLM first drafts a story by selecting provided human-written fragments and integrating them into its own generated text. Then, the LLM repeatedly refines the draft to fix contradictions or disfluencies. In both stages, the LLM is instructed to follow the constraint that a specified portion of the text should be copied verbatim from the human-written fragments (Figure 1).

Putting Frankentexts under the microscope: Using 100 creative writing prompts from the Mythos dataset (Kumar et al., 2025), we evaluate Frankentexts on three dimensions: writing quality, adherence to instructions, and detectability.
Both automatic and human evaluations show that powerful closed LLMs like Gemini-2.5-Pro (DeepMind, 2024) can meet the extreme copy constraint while producing largely coherent and relevant stories. Moreover, Frankentexts also frequently evade detection by state-of-the-art automatic methods such as Pangram (Emi and Spero, 2024), Binoculars (Hans et al., 2024), and FastDetectGPT (Bao et al., 2024). Human annotators note a distinct voice that makes some Frankentexts “feel human” despite being AI-generated. However, our evaluation also reveals limitations: most human evaluators identify many Frankentexts as AI-generated due to subtle signs (e.g., abrupt tone shifts or inconsistent grammar between stitched segments), especially in longer narratives.

We frame the construction of Frankentexts as a novel instruction-following task that tests the limits of controllable generation. Our paper focuses on their intrinsic properties (coherence, instruction adherence, detectability) as a prerequisite to pursuing meaningful downstream applications. By releasing our code and evaluation suite, we hope to facilitate future research on fine-grained detection of mixed-origin text and provide a playground for human-AI co-writing studies.

2 The Frankentext pipeline

We introduce a simple and effective pipeline for generating coherent and relevant Frankentexts, in which various segments are directly copied from human-written sources. The pipeline’s simplicity serves as its strength – enabling users to construct Frankentexts easily – and its threat, making it easy to create
outputs that evade detection.

2.1 Generating Frankentexts

We include a random sample of 1,500 human-written snippets in our generation prompts, as illustrated in Figure 1.¹ A snippet refers to a paragraph-sized unit of text used for sampling.² Since our focus is on creative writing, we sample these snippets from Books3 (Presser, 2020), a dataset consisting of 197K books (>160 million snippets) originally hosted on Bibliotik.³ Our goal is to produce coherent narratives that are composed of a specified portion of human writing and aligned with the given writing prompt. Therefore, our pipeline focuses on generating texts that are relevant to the prompt in the initial draft, and then refining the draft in a subsequent editing phase to improve coherence.

Obtaining the first draft: At this stage, we prompt an LLM to produce a first draft in which a specified portion of the content is taken verbatim from the human-written snippets, with the remaining text consisting of connective words and transitional phrases (Figure 19). We optionally add another editing step that aims to increase the verbatim copy rate, which can be implemented either by attribution metrics like ROUGE-L or by AI-text detectors themselves (Figure 20).⁴

Writing Prompt: After being killed in a Black Friday stampede, you’re sent to hell. The devil offers to let you be in charge of torturing your fellow mankind, expecting you to refuse like all the others. Except the devil doesn’t realize you’ve worked retail for 15 years.

Gemini 2.5 Pro Vanilla Generation: The last thing I saw was a blur of crazed shoppers, their eyes wide with a feral glint, all surging towards a mountain of discounted flat-screen TVs. Then, a sharp pain, a sickening crunch, and darkness. Typical Black Friday, really. Except for the dying part. When I regained some semblance of consciousness, the air was thick with the smell of sulfur and burnt sugar, and the soundtrack was a cacophony of screams and what sounded suspiciously like elevator music. Definitely not the pearly gates. A dapper figure in a surprisingly well-tailored suit, complete with horns that had been polished to a high sheen, regarded me with an air of bored amusement. "Well, well, well," he drawled, his voice smooth as aged whiskey. "Another one trampled in the name of consumerism. Humans, so predictable." He gestured vaguely around at the fiery landscape. "Welcome to the Underworld. I’m Lucifer. You can call me Lucy." I blinked, trying to process. "So, this is it then? Eternal damnation?" Lucifer chuckled. "More or less. We have a rather... traditional approach to retribution down here. However," he leaned forward, a conspiratorial gleam in his crimson eyes, "I have a proposition for you. A sort of management position, if you will."...

Gemini 2.5 Pro Frankentext: “Brenda, isn’t it?” he asked in his peculiarly accented English. She nodded mutely. The man laughed. “You’re wondering about this place.” He leaned back. She took the offered seat He smiled again, that cold smile. “I have a proposition for you. We have never before had such capabilities,” he continued, gesturing expansively. “Capabilities that can be used in the interests of terror, destruction...” He fixed his gaze on her. “I offer you a management position. Tormenting mankind. Like all the others before you, you will, of course, refuse. But the offer stands.” Brenda simply stared at the Devil for a moment. Refuse? Why would she refuse? He had just offered her a management position. After fifteen years scrambling for assistant manager roles that paid peanuts, this was practically a dream job – albeit a fiery, nightmarish one. The Devil had not expected her silence. Blinking, he sat back, stared, then noticed his jaw was open and closed it. Brenda finally found her voice, though it was dry and raspy. “Okay, so, what are the key performance indicators for this role?” The Devil leaned forward again, a flicker of something – interest? confusion? – in his eyes. Still, his tone remained smooth. “Your department will handle the newcomers, specifically the ones arriving from retail environments. Think of the task like pest control.” ...

Table 1: Side-by-side comparison of vanilla and Frankentexts by Gemini-2.5-Pro. AI-generated keywords from the Pangram API are highlighted in blue. Vanilla generations contain many more of these keywords compared to Frankentexts (§G). The full examples are shown in Table 6.

¹1,500 snippets spanning 103K BPE tokens, an amount well within the 128K context window supported by most recent LLMs. Most snippets come from distinct books (e.g., 1,500 paragraphs may be drawn from 1,497 books).
²A snippet can also be a sentence. See our experiments on different snippet types in §I.
³We note that the paragraphs could be sourced from anywhere (e.g., Project Gutenberg, news, etc.), but we choose Books3 since modern fiction is the focus of our experiments.
⁴We implement this step with the Pangram prediction API.
If the generated draft is flagged as “highly likely AI” or “likely AI,” we interpret this as a failure to adhere to the verbatim copy rule and regenerate accordingly. We note this step is rarely triggered by larger models (e.g., only 6 times out of 100 for Gemini).

Polishing the draft: The first draft may contain writing issues, including contradictions (e.g., character traits, descriptions, facts, or points of view), continuity errors (e.g., actions conflicting with earlier ones), irrelevant content (e.g., unfiltered citations or filler text), and mechanical problems (e.g., grammar, phrasing, or pronoun mismatches). To address these, we use the same LLM to identify and apply minimal edits that improve coherence while still respecting the verbatim copy rule and the writing prompt, similar to a self-correct step (Shinn et al., 2023; Madaan et al., 2023). We repeat this step up to three times and stop as soon as the model returns “no edits,” which indicates that the draft is already coherent (Figure 21).⁵

⁵We further provide an ablation where the editing round is omitted in Appendix J.

3 Experimental setup

Our pipeline is optimized for story generation, a task that requires both strong instruction-following and advanced text generation capabilities (Xie et al., 2023; Huot et al., 2025). Thus, we select a dataset focused on creative writing, a set of models with strong reasoning and instruction-following abilities, and an intrinsic evaluation set-up designed to highlight the feasibility and value of Frankentexts.

3.1 Dataset

We source our writing prompts from Mythos (Kumar et al., 2025), a dataset of 3,200 prompts recently posted on Reddit’s r/WritingPrompts, to mitigate potential data contamination issues. We select a representative subset of 100 prompts for our experiments.⁶

3.2 Models

We include models from five families known for their performance in instruction-following and creative text generation (Chiang et al., 2024): Gemini-2.5-Pro (exp-03-25 checkpoint), Claude-3.7-Sonnet (thinking enabled) (Anthropic, 2025), o4-mini (2025-04-16 checkpoint, with reasoning effort set to high) (OpenAI, 2025), DeepSeek R1 (DeepSeek-AI et al., 2025), and Qwen3-32B (both thinking and non-thinking) (QwenTeam, 2025).⁷ We instruct these models to produce Frankentexts that contain approximately 500 words and have 90% of the text copied verbatim from the provided human-written samples. We use 1,500 human-written paragraphs (around 103K BPE tokens) in our standard configuration.

Vanilla baselines: Alongside Frankentexts, we also generate “vanilla” outputs from the same set of models. In this baseline setup, each model is given the same writing prompts and instructed to produce outputs of equivalent length, without any additional constraints or filtering.

3.3 Automatic evaluation

We develop a suite of intrinsic evaluation metrics to assess our generations along three key dimensions: writing quality (coherence), adherence to instructions (relevance, word count, copy rate), and detectability (AI text detector results).

Coherence: We report the percentage of coherent Frankentexts.
Specifically, we use GPT-4.1 to provide a binary judgment (True/False) on whether the text contains any coherence issues that, if left unaddressed, would significantly affect a reader’s ability to understand it (Chang et al., 2024b; Chiang and Lee, 2023). Examples of such coherence issues include inconsistencies in plot or events, characters, spatial relationships, themes, and surface-level details (Figure 17).

⁶See §B for experiment costs.
⁷We use the default or recommended hyperparameters for each model. We prioritize reasoning models in our experiments because non-reasoning models like GPT-4o and Claude-3.5-Sonnet fail to effectively follow the imposed constraints in our preliminary experiments.

Relevance: We report the percentage of Frankentexts that are faithful to the writing prompt. Similar to our coherence evaluation, we use GPT-4.1 to assess whether each Frankentext fully adheres to the prompt without introducing any conflicting details, following Atmakuru et al. (2024) (Figure 18).⁸

Copy rate: We report the average copy rate, which captures the proportion of the Frankentext copied from the given human-written content (Akoury et al., 2020). This metric also allows us to track which segments of the text are AI- or human-written. We first map each token-level trigram from the human-written snippets included in the generation process to its source texts. Then, using the trigrams from each Frankentext, we retrieve all human snippets sharing at least 4 trigrams to reduce false positives.⁹ We then rank candidate snippets by shared trigram count and filter out those whose trigrams are already covered by higher-ranked snippets. Finally, we
reorder the matched human-written content to be consistent with the content in the Frankentext and calculate the ROUGE-L score between the Frankentext and the combined candidate snippets (i.e., the ratio of the longest common subsequence’s length to the Frankentext’s length).

Detectability: We report the percentage of Frankentexts determined to be AI-generated by three state-of-the-art AI text detectors (Russell et al., 2025):¹⁰

• Pangram (Emi and Spero, 2024) is a closed-source detector using a Transformer classifier trained with hard negative mining and synthetic data. We report results from its sliding window API, which evaluates the full text.¹¹

⁸Fleiss’ κ values for agreement between LLM and human are –0.049 for coherence and –0.187 for faithfulness, indicating no agreement beyond chance. However, it is important to note that even inter-annotator agreement is low (–0.008), which highlights the inherent ambiguity of the task and the need for adjudicated or model-based evaluation approaches.
⁹All texts are preprocessed by removing non-alphanumeric characters, lemmatizing, stemming, and replacing pronouns with a placeholder.
¹⁰We do not evaluate GPTZero due to resource constraints.
¹¹Labels “Highly likely AI,” “Likely AI,” and “AI” are grouped as “AI involvement”; “Human” and “Unlikely AI” as “Human.” Pangram also includes a “mixed” label.
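The copy-rate metric described above can be sketched in simplified form. The sketch below tokenizes by whitespace and omits the preprocessing of footnote 9 (lemmatization, pronoun masking) and the reordering step, so it is an approximation of the metric rather than the exact implementation; `min_shared` mirrors the 4-trigram cutoff:

```python
def trigrams(tokens):
    """Set of token-level trigrams in a token sequence."""
    return {tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)}

def lcs_len(a, b):
    """Length of the longest common subsequence (the ROUGE-L numerator)."""
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0]
        for j, y in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if x == y else max(prev[j], cur[-1]))
        prev = cur
    return prev[-1]

def copy_rate(frankentext, snippets, min_shared=4):
    """Approximate fraction of the Frankentext traceable to human snippets."""
    ft = frankentext.split()
    ft_tris = trigrams(ft)
    # Retrieve candidate snippets sharing at least `min_shared` trigrams.
    cands = []
    for snippet in snippets:
        toks = snippet.split()
        shared = ft_tris & trigrams(toks)
        if len(shared) >= min_shared:
            cands.append((len(shared), shared, toks))
    # Rank by shared-trigram count; drop snippets whose trigrams are
    # already covered by higher-ranked ones.
    cands.sort(key=lambda c: -c[0])
    covered, kept = set(), []
    for _, shared, toks in cands:
        if shared - covered:
            kept.append(toks)
            covered |= shared
    combined = [tok for toks in kept for tok in toks]
    return lcs_len(ft, combined) / max(len(ft), 1)
```

A text that copies a nine-word snippet verbatim and appends three new words, for instance, scores 9/12 = 0.75 under this approximation.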
| Model | Word count | Copy % (↑) | Relevance % (↑) | Coherence % (↑) | Pangram % AI (↓) | Pangram % mixed (↓) | Pangram AI fraction (↓) | Binoculars % (↓) | FastDetectGPT % (↓) |
|---|---|---|---|---|---|---|---|---|---|
| Vanilla baselines | | | | | | | | | |
| Gemini-2.5-Pro-exp | 593 | – | 100 | 100 | 100 | 0 | 100 | 52 | 99 |
| o4-mini | 505 | – | 100 | 100 | 100 | 0 | 100 | 16 | 74 |
| Claude-3.7-Sonnet | 485 | – | 100 | 100 | 100 | 0 | 100 | 20 | 53 |
| DeepSeek-R1 | 550 | – | 100 | 100 | 100 | 0 | 100 | 9 | 42 |
| Qwen-3-32B thinking | 699 | – | 100 | 100 | 100 | 0 | 100 | 92 | 100 |
| Qwen-3-32B | 656 | – | 100 | 100 | 100 | 0 | 100 | 100 | 100 |
| Frankentext | | | | | | | | | |
| Gemini-2.5-Pro-exp | 521 | 75 | 100 | 81 | 4 | 37 | 16 | 0 | 1 |
| o4-mini | 464 | 68 | 56 | 29 | 24 | 14 | 27 | 0 | 1 |
| Claude-3.7-Sonnet | 890 | 64 | 95 | 73 | 17 | 47 | 27 | 2 | 16 |
| DeepSeek-R1 | 303 | 42 | 91 | 72 | 74 | 3 | 72 | 0 | 0 |
| Qwen-3-32B thinking | 578 | 36 | 91 | 54 | 85 | 8 | 89 | 52 | 92 |
| Qwen-3-32B | 843 | 21 | 90 | 72 | 90 | 4 | 91 | 89 | 98 |

Table 2: Results for vanilla generations and Frankentexts across three categories: INSTRUCTION ADHERENCE (word count, copy rate, relevance), WRITING QUALITY (coherence), and DETECTABILITY. Except for word count, all metrics represent the percentage of samples (e.g., Pangram % AI reflects the percentage of texts identified as highly likely or likely AI-generated). Gemini-2.5-Pro produces Frankentexts that are difficult to detect while remaining competitive on instruction adherence and coherence. Dark green cells indicate the best value for each metric, and light green cells indicate the second best.

• Binoculars (Hans et al., 2024) is an open-source detector that uses cross-perplexity from two language models. We use its accuracy-based version.¹²

• FastDetectGPT (Bao et al., 2024) is an open-source method that scores text perturbations to estimate conditional probability curvature. We use a false positive rate of 0.05.¹³

3.4 Human evaluation

We conduct two human evaluation studies with 3 annotators on a subset of 30 Gemini
Frankentexts to further understand human perception of writing quality and detectability, for a total cost of $700 USD.¹⁴

Single-story evaluation: We assess the coherence, relevance, and human detectability of Frankentexts, as well as identify potential limitations. Annotators are presented with a writing prompt and a corresponding Frankentext sample. Following the annotation protocol from Yang et al. (2022), annotators provide binary ratings on relevance, coherence, and authorship (AI-generated vs. human-written). Additionally, they select from a list of predefined writing issues and offer optional justifications in a long-form response.

Pairwise evaluation: We also look into how Frankentexts compare to standard, unconstrained LLM generation. Using the same rubric, annotators review both a Gemini-generated Frankentext and a vanilla Gemini generation for the same prompt, then assess them side by side.

¹²Texts with probability greater than 0.9015310749276843 are classified as AI-generated.
¹³The threshold is 0.7890873125379173, based on 100 pre-2018 r/WritingPrompts human-written stories.
¹⁴Annotators were paid $70 USD for the single evaluation or $150 for the pairwise evaluation. See the annotation interface in §E and an example highlighted story in Figure 9.

4 Results

Despite the complex setup, Frankentexts are surprisingly coherent and faithful to the writing prompt, which raises serious concerns about the copyright and adversarial appropriation of human work. Given their high proportion of human-written material, Frankentexts are mostly misclassified as human-written by binary detectors like Binoculars and FastDetectGPT. Pangram, which can detect mixed authorship, performs better but still misses 59% of the cases, highlighting the need to adapt detection tools to this new generation paradigm. Human evaluations reveal areas where future LLMs could improve, particularly in handling jarring transitions and grammatical issues.
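The binary labels from the open-source detectors reduce to fixed decision thresholds on a continuous score (footnotes 12 and 13). A minimal sketch of that binarization step, with `score` standing in for a detector's output, which we do not reimplement here:

```python
# Decision thresholds quoted in footnotes 12-13.
BINOCULARS_THRESHOLD = 0.9015310749276843     # accuracy-oriented Binoculars mode
FASTDETECTGPT_THRESHOLD = 0.7890873125379173  # calibrated on pre-2018 human stories

def binarize(score, threshold):
    """Map a continuous detector score to a binary authorship label."""
    return "AI-generated" if score > threshold else "Human-written"
```

Because the cut is binary, a Frankentext whose score falls just below the threshold is labeled fully human, with no notion of partial AI involvement.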
We also demonstrate the versatility of Frankentexts across diverse settings, including more specific prompts, varying levels of verbatim copying, and applicability to nonfiction texts.

ANNOTATOR COMMENTS

• For example, the line "Hello, eyes, how you doing? Enjoy looking while you can..." is poetic and disturbing. Dr Peterson’s speech is a strange mixture of styles, ranging from pure information that looks like it’s been lifted straight from Wikipedia to more natural sounding.

• It’s creative with a mix of ideas, like a vampire being a farmer to barter food for blood bags. It’s coherent enough to follow, but the dialogue is uneven. Some parts just feel a little disjointed, however, the concept of the story is quite interesting.

• Another puzzling story that has no consistent plot. Random bits and pieces from elsewhere perhaps? The current state of AI is such that even it will be embarrassed to generate such nonsensical drivel. So the author must be human!

• There’s a strange imbalance to the dialogue. It starts off super-casual with "Dude..." then goes into complex scientific talk about skin types. This lacks the normalcy of human interaction. The punctuation is all over the place, and it’s unclear sometimes where someone’s speech starts. The tone shifts enormously, going from casual banter to a medical lecture to lyrical/poetic to sci-fi. All this indicates AI is trying to
blend many ideas into one story.

Table 3: Annotator comments zeroing in on the challenges of the Frankentexts task. Blue highlights indicate problems with shifting tone/style, orange highlights indicate problems with coherent plots, purple highlights indicate problems with dialogue, and green highlights indicate problems with grammar.

4.1 Frankentexts are challenging for binary detectors while remaining coherent and relevant to the story premise

When evaluated across all three intrinsic evaluation criteria (detectability, instruction-following, and writing quality), Gemini emerges as the top-performing model for the task (Table 2). Overall, Frankentexts show two key trends:

Strong coherence and relevance: Most models, except for o4-mini and Qwen3-32B-thinking, produce Frankentexts that exceed 90% relevance and 70% coherence, which is surprising and impressive given the complexity of the task. Gemini, in particular, demonstrates strong instruction-following and writing quality. Its Frankentexts closely match the target word count of 500 and achieve the highest copy rate of 75%, meaning that 75% of the content can be traced back to the human-written source material. However, this still falls short of the prompt’s requirement of 90% verbatim copying, suggesting there is room for improvement in instruction adherence.

Lower detectability: While most vanilla generations are flagged as AI-generated, Frankentexts from proprietary models (Gemini, o4-mini, and Claude) are often mistaken for human writing, especially by binary detectors such as Binoculars and FastDetectGPT. Binoculars and FastDetectGPT detect fewer than 20% of these outputs, indicating that they struggle with mixed-authorship content. In contrast, Pangram detects up to 37% of Gemini Frankentexts and 47% of Claude Frankentexts as “mixed”.
However, Pangram flags over 74% of Frankentexts from DeepSeek R1 and Qwen as highly likely or likely AI-generated, suggesting that current mixed-authorship detectors may still need improvement, and that these open-source models may lack the instruction-following capabilities required to produce convincingly mixed text.

4.2 Humans can identify AI involvement in Frankentexts

Most annotators are better than automatic detectors at identifying AI involvement in Frankentexts. On the subset used for human evaluation, human annotators identify on average 56% of Frankentexts as likely AI-written, while Pangram detects 33% as mixed or AI-generated, and neither Binoculars nor FastDetectGPT identifies any AI-generated content (Figure 2). Annotators also pick up on signs of mixed authorship within individual samples, as shown in comments like the final note in Table 3. Their judgments often hinge on surface-level inconsistencies, such as abrupt tonal shifts or awkward punctuation, that a human author would typically revise. Because Frankentexts include verbatim excerpts from human writing, they pose a particular challenge for binary detectors, which rely heavily on surface features. We argue that future detectors should consider deeper semantic analysis and other contextual cues to effectively recognize this new class of AI-involved texts.

Figure 2: Detection rates among 6 annotators (93%, 20%, 37%, 50%, 63%, and 70%) and 3 detectors (Pangram: 33%, Binoculars: 0%, FastDetectGPT: 0%) on the 30 Gemini Frankentexts used for human evaluation. We count mixed, highly likely and likely AI labels in Pangram’s detection rate.
The red line represents annotators’ average detection rate.

4.3 Frankentexts primarily struggle with narrative transitions and grammar

Our single-story human evaluation finds that 71% of Frankentext outputs are coherent, 91% are relevant to prompts, and 84% are novel. Table 1 shows a sample comparison with vanilla generations.¹⁵ Annotators praise Frankentexts for their inventive premises, vivid descriptions, and dry humor, noting a distinct voice or emotional hook that made some outputs “feel human” despite being AI-generated (Table 3). However, they also identify key issues: abrupt narrative shifts (50%), disfluency (43%), confusing passages (40%), and factual errors (24%). Pairwise evaluations further show that Frankentexts are less coherent than vanilla generations (4% win rate) but less likely to be judged AI-generated (40% vs. 22%). These challenges likely stem from the difficulty of stitching together paragraphs not authored by the same LLM, which could be alleviated with improved instruction-following and grammar correction.

¹⁵Full stories are in Table 6 of §E.

4.4 Higher copy rates reduce coherence but make detection more challenging

We explore the effects of varying the percentage of verbatim human-written text required in the prompt to Gemini, from the default 90% down to 75%, 50%, and 25%. According to Figure 3, as the proportion of human-written content increases, detectability decreases, indicating an inverse relationship between human content and detection rates.

Figure 3: Effects of varying the percentage of required verbatim copy (25%, 50%, 75%, 90%) on the Pangram AI detection rate (mixed, highly likely, and likely AI labels), copy rate, and coherence of the Frankentexts.

Coherence also declines as human-written content increases, suggesting a trade-off between incorporating more human text and maintaining coherence.
On the other hand, increasing the proportion of human text leads to higher copy rates, indicating that Gemini can generally follow the copy instruction.

Copy rate as a proxy for the proportion of human writing in co-authored texts: The copy rate of 75% observed in the 90% verbatim copy setting corresponds to the proportions found in AI-human co-writing datasets, where approximately 66% of the content is human-written and 14% consists of AI-edited segments (Lee et al., 2022; Richburg et al., 2024). While the CoAuthor setup of Lee et al. (2022) only studies a setting in which LLMs can add sentences to human text, Frankentexts also consider AI-generated content at varying granularities, including both word-level and sentence-level, as illustrated in Figure 1. Additionally, CoAuthor costs approximately $3,613 to generate 1,445 texts at $2.50 each,¹⁶ whereas we can produce 100 Frankentexts for just $132.38 ($1.32 each) without requiring a complex setup. This highlights Frankentexts’ potential as a cost-effective source of synthetic data for collaborative writing tasks, where AI may augment human writing at multiple levels of composition.¹⁷

4.5 Specific writing prompts require more AI text, which leads to higher detectability

Writing prompts from r/WritingPrompts often provide only a general story premise rather than strict constraints. But what happens if we introduce additional
constraints to Frankentexts via these writing prompts? We run Frankentexts with Gemini on 100 prompts from the Tell Me a Story dataset (Huot et al., 2025), which include more specific requirements such as mandated story elements and points of view (see Table 5). We find that as prompt complexity increases, the copy rate drops slightly from 74% to 68%, while the average AI fraction determined by Pangram rises by 7%. These trends indicate that, to meet more complex constraints, models need to contribute more original content to the story. Nevertheless, they manage to perform well under a different prompt setup.

¹⁶The price excludes around $12 for GPT-3.5 usage.
¹⁷We note that, in this use case, users should sample human-written snippets from the public domain or obtain them with proper permission.

4.6 Room for improvement in nonfiction

We explore Frankentexts’ generalizability to nonfiction texts using 1,500 human-written paragraphs from the HUMAN DETECTORS corpus of news articles (Russell et al., 2025).¹⁸ We generate Frankentexts for 100 news writing prompts, which consist of titles and subtitles collected from May 2025 news articles.¹⁹ Despite the domain shift, the resulting Frankentexts maintain 72% coherence and 95% faithfulness to the prompt, with a 66% copy rate. Notably, they remain difficult for automated detectors: 41% are flagged by Pangram as mixed or AI-generated, and none are identified by Binoculars or FastDetectGPT. Upon closer inspection, Frankentexts exhibit characteristics of quasi-journalistic narrative, such as detailed scene descriptions and frequent anecdotal quotes (Figure 10), which make them read more like a story than a straightforward news article.²⁰ Therefore, further prompt engineering or retrieval might be necessary to obtain high-quality and realistic nonfiction Frankentexts.

¹⁸An article collection from reputable sources (e.g., National Geographic, Smithsonian Magazine, and The New York Times).
¹⁹Articles from The New York Times and The Atlantic. We replace all instances of “story” in the prompt with “news article” and explicitly request factual accuracy.
²⁰We see Gemini fabricating specific details such as people (“Dr. Thorne”) and organizations (“GenNova Institute”).
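All of the settings above reuse the two-stage procedure from §2.1: draft from human snippets, then iteratively polish until the model returns "no edits" (at most three rounds). A minimal sketch, where `llm` is a hypothetical text-completion callable and the instruction strings are paraphrases, not the exact prompts from Figures 19-21:

```python
def generate_frankentext(llm, prompt, snippets, copy_ratio=0.9, max_polish=3):
    """Draft-then-polish loop; `llm` maps an instruction string to text."""
    # Stage 1: draft a story that copies `copy_ratio` of its words verbatim.
    draft = llm(
        f"Write a ~500-word story for this prompt: {prompt}\n"
        f"Copy {copy_ratio:.0%} of the words verbatim from these passages:\n"
        + "\n\n".join(snippets)
    )
    # Stage 2: up to `max_polish` rounds of minimal coherence edits,
    # stopping early once the model reports no remaining issues.
    for _ in range(max_polish):
        revised = llm(
            "Fix contradictions, continuity errors, and disfluencies with "
            "minimal edits, keeping the verbatim-copy constraint. "
            "Reply 'no edits' if the draft is already coherent.\n" + draft
        )
        if revised.strip().lower() == "no edits":
            break
        draft = revised
    return draft
```

The early-exit condition is what keeps cost low: coherent drafts terminate after a single polish call rather than running all three rounds.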
5 Related Work

Instruction-Aligned AI-Human Collaborative Writing: Constrained text generation techniques have been widely used to enforce narrative coherence. Planning-based approaches improve on an initial outline (Fan et al., 2018; Yao et al., 2019; Fan et al., 2019; Papalampidi et al., 2022; Rashkin et al., 2020; Yang et al., 2023, 2022), while others apply constraints to steer the writing process (Sun et al., 2021; Kong et al., 2021; Pham et al., 2024). Multiple benchmarks examine the stability of constraint satisfaction in creative writing (Bai et al., 2025; Wu et al., 2025; Atmakuru et al., 2024). Several systems and datasets capture fine-grained human-LLM writing interactions, enabling the study of authorship attribution, stylistic blending, and collaborative revision. Lee et al. (2022), Yuan et al. (2022), Yeh et al. (2025), Chakrabarty et al. (2024), and Ippolito et al. (2022) log revision histories and suggestion traces, while Chakrabarty et al. (2022), Akoury et al. (2020), and Venkatraman et al. (2025) support token- or sentence-level authorship analysis, including multi-LLM collaboration. Attribution models struggle in these mixed-authorship settings (Richburg et al., 2024; He et al., 2025).

Fine-grained AI-text Detection: The task of detection tries to address not just whether, but how much of a text is AI-generated. This is a fundamentally challenging task (Zeng et al., 2024), since existing detectors are sensitive to light AI-polishing (Saha and Feizi, 2025). Many have proposed boundary-detection tasks (Dugan et al., 2023b,a; Kushnareva et al., 2024) or sentence-level automatic detectors (Wang et al., 2023, 2024b) to gain more granular detections. Recent work has explored whether collaborative LLM-human works can be detected (Zhang et al., 2024; Artemova et al., 2025; Abassy et al., 2024). Richburg et al. (2024) find that detection models are not yet robust to texts co-written by AI and humans.

6 Conclusion

We introduce Frankentexts, a challenging constrained text generation paradigm in which an LLM produces a narrative using predominantly human-written passages with only minimal connective AI-generated text. By forcing LLMs to compose with mostly human writings, Frankentexts expose a corner of controllable generation that had remained unexplored: content grafting under long-context constraints. Despite the nature of their creation, Frankentexts are generally rated as coherent and relevant, and they pose a fundamental challenge for binary detectors of AI generations. The token-level labels could supply large-scale training data for mixed-authorship detection, attribution, and co-writing simulations. We release our data and code with the hope that our work will shift the conversation from simply asking “Was this written by AI?” to “Whose words are we reading, and where do they begin and end?”.

Limitations

Frankentexts presuppose a large pool of high-quality, in-domain human texts. Many languages, genres (e.g., technical manuals), and low-resource domains lack such corpora, limiting immediate portability.
The copy rate may underestimate the exact percentage of human-written text incorporated into the final generation. Our work deliberately exposes a novel attack surface (the ease with which an LLM can weave large amounts of verbatim human prose into a fluent narrative) to spur the development of mixed-authorship detectors and other defences. However, we do not propose or evaluate any concrete defence against Frankentexts attacks; our contribution is diagnostic, leaving the design of detection or mitigation strategies to future work.

Ethical Considerations

Books3 contains works still under copyright. Our use is strictly non-commercial research and we do not support its use for model pretraining. Frankentexts blur authorship boundaries by design. We do not treat this technique as a substitute for genuine authorship or creative writing. As LLMs improve, binary AI-vs-human detectors will fail, and large verbatim excerpts may leave original authors uncredited. Robust provenance tools and transparency will be essential. We recognize that similar techniques could be misused for plagiarism or obfuscation. We strongly discourage such applications. Our research is intended to inform the development of better provenance-tracking and attribution tools, and to support educational or analytical use cases, not to replace genuine human writing or enable deceptive practices. Our human evaluation received approval from an
institutional review board. All annotators gave their informed consent and participated voluntarily, without compensation, to support our research.

Acknowledgment

We thank members of the UMD CLIP and UMass NLP lab for their helpful feedback. We extend special gratitude to Marzena Karpinska for helping us with Figure 1. This project was partially supported by awards IIS-2046248, IIS-2312949, and IIS-2202506 from the National Science Foundation (NSF).

References

Mervat Abassy, Kareem Elozeiri, Alexander Aziz, Minh Ngoc Ta, Raj Vardhan Tomar, Bimarsha Adhikari, Saad El Dine Ahmed, Yuxia Wang, Osama Mohammed Afzal, Zhuohan Xie, Jonibek Mansurov, Ekaterina Artemova, Vladislav Mikhailov, Rui Xing, Jiahui Geng, Hasan Iqbal, Zain Muhammad Mujahid, Tarek Mahmoud, Akim Tsvigun, and 5 others. 2024. LLM-DetectAIve: a tool for fine-grained machine-generated text detection. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 336–343, Miami, Florida, USA. Association for Computational Linguistics.

Nader Akoury, Shufan Wang, Josh Whiting, Stephen Hood, Nanyun Peng, and Mohit Iyyer. 2020. STORIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6470–6484, Online. Association for Computational Linguistics.

Anthropic. 2025. Claude 3.7 Sonnet system card. https://www.anthropic.com/claude-3-7-sonnet-system-card. System card for Claude 3.7 Sonnet.

Ekaterina Artemova, Jason S Lucas, Saranya Venkatraman, Jooyoung Lee, Sergei Tilga, Adaku Uchendu, and Vladislav Mikhailov. 2025. Beemo: Benchmark of expert-edited machine-generated outputs. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 6992–7018, Albuquerque, New Mexico.
Association for Computational Linguistics.

Anirudh Atmakuru, Jatin Nainani, Rohith Siddhartha Reddy Bheemreddy, Anirudh Lakkaraju, Zonghai Yao, Hamed Zamani, and Haw-Shiuan Chang. 2024. CS4: Measuring the creativity of large language models automatically by controlling the number of story-writing constraints. Preprint, arXiv:2410.04197.

Yushi Bai, Jiajie Zhang, Xin Lv, Linzhi Zheng, Siqi Zhu, Lei Hou, Yuxiao Dong, Jie Tang, and Juanzi Li. 2025. LongWriter: Unleashing 10,000+ word generation from long context LLMs. In The Thirteenth International Conference on Learning Representations.

Guangsheng Bao, Yanbin Zhao, Zhiyang Teng, Linyi Yang, and Yue Zhang. 2024. Fast-DetectGPT: Efficient zero-shot detection of machine-generated text via conditional probability curvature. In The Twelfth International Conference on Learning Representations.

Tuhin Chakrabarty, Philippe Laban, and Chien-Sheng Wu. 2025. AI-slop to AI-polish? Aligning language models through edit-based writing rewards and test-time computation. Preprint, arXiv:2504.07532.

Tuhin Chakrabarty, Vishakh Padmakumar, Faeze Brahman, and Smaranda Muresan. 2024. Creativity support in the age of large language models: An empirical study involving professional writers. In Proceedings of the 16th Conference on Creativity & Cognition, C&C '24, page 132–155, New York, NY, USA. Association for Computing Machinery.

Tuhin Chakrabarty, Vishakh Padmakumar, and He He. 2022. Help me write a poem: Instruction tuning as a vehicle for collaborative poetry writing. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6848–6863, Abu Dhabi, United
Arab Emirates. Association for Computational Linguistics.

Yapei Chang, Kalpesh Krishna, Amir Houmansadr, John Frederick Wieting, and Mohit Iyyer. 2024a. PostMark: A robust blackbox watermark for large language models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 8969–8987, Miami, Florida, USA. Association for Computational Linguistics.

Yapei Chang, Kyle Lo, Tanya Goyal, and Mohit Iyyer. 2024b. BooookScore: A systematic exploration of book-length summarization in the era of LLMs. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net.

Cheng-Han Chiang and Hung-yi Lee. 2023. Can large language models be an alternative to human evaluations? In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15607–15631, Toronto, Canada. Association for Computational Linguistics.

Wei-Lin Chiang, Lianmin Zheng, Ying Sheng, Anastasios Nikolas Angelopoulos, Tianle Li, Dacheng Li, Hao Zhang, Banghua Zhu, Michael Jordan, Joseph E. Gonzalez, and Ion Stoica. 2024. Chatbot Arena: An open platform for evaluating LLMs by human preference. Preprint, arXiv:2403.04132.

Elizabeth Clark, Tal August, Sofia Serrano, Nikita Haduong, Suchin Gururangan, and Noah A. Smith. 2021. All that's 'human' is not gold: Evaluating human evaluation of generated text. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7282–7296, Online. Association for Computational Linguistics.

Isaac David and Arthur Gervais. 2025. AuthorMist: Evading AI text detectors with reinforcement learning. Preprint, arXiv:2503.08716.

Google DeepMind. 2024. Gemini 2.5 Pro. https://deepmind.google/technologies/gemini/. Large language model.
DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, and 181 others. 2025. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. Preprint, arXiv:2501.12948.

Matthijs Douze, Alexandr Guzhva, Chengqi Deng, Jeff Johnson, Gergely Szilvasy, Pierre-Emmanuel Mazaré, Maria Lomeli, Lucas Hosseini, and Hervé Jégou. 2024. The Faiss library. Preprint, arXiv:2401.08281.

Liam Dugan, Alyssa Hwang, Filip Trhlík, Andrew Zhu, Josh Magnus Ludan, Hainiu Xu, Daphne Ippolito, and Chris Callison-Burch. 2024. RAID: A shared benchmark for robust evaluation of machine-generated text detectors. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12463–12492, Bangkok, Thailand. Association for Computational Linguistics.

Liam Dugan, Daphne Ippolito, Arun Kirubarajan, Sherry Shi, and Chris Callison-Burch. 2023a. Real or fake text? Investigating human ability to detect boundaries between human-written and machine-generated text. In Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence and Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence and Thirteenth Symposium on Educational Advances in Artificial Intelligence, AAAI'23/IAAI'23/EAAI'23. AAAI Press.

Liam Dugan, Daphne Ippolito, Arun Kirubarajan, Sherry Shi, and Chris Callison-Burch. 2023b. Real or fake text? Investigating human ability to detect boundaries between human-written and machine-generated text. In Proceedings of the 2023 AAAI Conference on Artificial Intelligence.

Bradley
Emi and Max Spero. 2024. Technical report on the Pangram AI-generated text classifier. Preprint, arXiv:2402.14873.

Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889–898, Melbourne, Australia. Association for Computational Linguistics.

Angela Fan, Mike Lewis, and Yann Dauphin. 2019. Strategies for structuring story generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2650–2660, Florence, Italy. Association for Computational Linguistics.

Abhimanyu Hans, Avi Schwarzschild, Valeriia Cherepanova, Hamid Kazemi, Aniruddha Saha, Micah Goldblum, Jonas Geiping, and Tom Goldstein. 2024. Spotting LLMs with Binoculars: Zero-shot detection of machine-generated text. Preprint, arXiv:2401.12070.

Jessica He, Stephanie Houde, and Justin D. Weisz. 2025. Which contributions deserve credit? Perceptions of attribution in human-AI co-creation. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, CHI '25, New York, NY, USA. Association for Computing Machinery.

Fantine Huot, Reinald Kim Amplayo, Jennimaria Palomaki, Alice Shoshana Jakobovits, Elizabeth Clark, and Mirella Lapata. 2025. Agents' Room: Narrative generation through multi-step collaboration. In The Thirteenth International Conference on Learning Representations.

Daphne Ippolito, Daniel Duckworth, Chris Callison-Burch, and Douglas Eck. 2020. Automatic detection of generated text is easiest when humans are fooled. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1808–1822, Online. Association for Computational Linguistics.

Daphne Ippolito, Ann Yuan, Andy Coenen, and Sehmon Burnam. 2022. Creative writing with an AI-powered writing assistant: Perspectives from professional writers. Preprint, arXiv:2211.05030.
Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, 7(3):535–547.

John Kirchenbauer, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, and Tom Goldstein. 2023. A watermark for large language models. In Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pages 17061–17084. PMLR.

Ryuto Koike, Masahiro Kaneko, and Naoaki Okazaki. 2024. OUTFOX: LLM-generated essay detection through in-context learning with adversarially generated examples. In Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence and Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence and Fourteenth Symposium on Educational Advances in Artificial Intelligence, AAAI'24/IAAI'24/EAAI'24. AAAI Press.

Xiangzhe Kong, Jialiang Huang, Ziquan Tung, Jian Guan, and Minlie Huang. 2021. Stylized story generation with style-guided planning. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2430–2436, Online. Association for Computational Linguistics.

Kalpesh Krishna, Yixiao Song, Marzena Karpinska, John Frederick Wieting, and Mohit Iyyer. 2023. Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense. In Thirty-seventh Conference on Neural Information Processing Systems.

Nischal Ashok Kumar, Chau Minh Pham, Mohit Iyyer, and Andrew Lan. 2025. Whose story is it? Personalizing story generation by inferring author styles. Preprint, arXiv:2502.13028.

Laida Kushnareva, Tatiana Gaintseva, Dmitry Abulkhanov, Kristian Kuznetsov, German Magai, Eduard Tulchinskii, Serguei Barannikov, Sergey Nikolenko, and Irina Piontkovskaya. 2024. Boundary detection in mixed
AI-human texts. In First Conference on Language Modeling.

Mina Lee, Percy Liang, and Qian Yang. 2022. CoAuthor: Designing a human-AI collaborative writing dataset for exploring language model capabilities. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, CHI '22, New York, NY, USA. Association for Computing Machinery.

Ning Lu, Shengcai Liu, Rui He, Yew-Soon Ong, Qi Wang, and Ke Tang. 2024. Large language models can be guided to evade AI-generated text detection. Transactions on Machine Learning Research.

Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter Clark. 2023. Self-Refine: Iterative refinement with self-feedback. In Advances in Neural Information Processing Systems, volume 36, pages 46534–46594. Curran Associates, Inc.

Elyas Masrour, Bradley Emi, and Max Spero. 2025. DAMAGE: Detecting adversarially modified AI generated text. Preprint, arXiv:2501.03437.

Eric Mitchell, Yoonho Lee, Alexander Khazatsky, Christopher D. Manning, and Chelsea Finn. 2023. DetectGPT: Zero-shot machine-generated text detection using probability curvature. In Proceedings of the 40th International Conference on Machine Learning, ICML'23. JMLR.org.

Niklas Muennighoff, Nouamane Tazi, Loic Magne, and Nils Reimers. 2023. MTEB: Massive text embedding benchmark. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2014–2037, Dubrovnik, Croatia. Association for Computational Linguistics.

Charlotte Nicks, Eric Mitchell, Rafael Rafailov, Archit Sharma, Christopher D Manning, Chelsea Finn, and Stefano Ermon. 2024. Language model detectors are easily optimized against. In The Twelfth International Conference on Learning Representations.

OpenAI. 2025.
OpenAI o3 and o4-mini system card. https://cdn.openai.com/pdf/2221c875-02dc-4789-800b-e7758f3722c1/o3-and-o4-mini-system-card.pdf. System card for o3 and o4-mini models.

Pinelopi Papalampidi, Kris Cao, and Tomas Kocisky. 2022. Towards coherent and consistent use of entities in narrative generation. In International Conference on Machine Learning, pages 17278–17294. PMLR.

Chau Minh Pham, Simeng Sun, and Mohit Iyyer. 2024. Suri: Multi-constraint instruction following for long-form text generation. Preprint, arXiv:2406.19371.

Shawn Presser. 2020. Books3.

Qwen Team. 2025. Qwen3.

Hannah Rashkin, Asli Celikyilmaz, Yejin Choi, and Jianfeng Gao. 2020. PlotMachines: Outline-conditioned generation with dynamic plot state tracking. arXiv preprint arXiv:2004.14967.

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.

Aquia Richburg, Calvin Bao, and Marine Carpuat. 2024. Automatic authorship analysis in human-AI collaborative writing. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 1845–1855, Torino, Italia. ELRA and ICCL.

Jenna Russell, Marzena Karpinska, and Mohit Iyyer. 2025. People who frequently use ChatGPT for writing tasks are accurate and robust detectors of AI-generated text. Preprint, arXiv:2501.15654.

Vinu Sankar Sadasivan, Aounon Kumar, Sriram Balasubramanian, Wenxiao Wang, and Soheil Feizi. 2024. Can AI-generated text be reliably
detected?

Shoumik Saha and Soheil Feizi. 2025. Almost AI, almost human: The challenge of detecting AI-polished writing. Preprint, arXiv:2502.15666.

Mary Shelley. 1818. Frankenstein; or, The Modern Prometheus. Lackington, Hughes, Harding, Mavor & Jones, London. Original edition.

Zhouxing Shi, Yihan Wang, Fan Yin, Xiangning Chen, Kai-Wei Chang, and Cho-Jui Hsieh. 2024. Red teaming language model detectors with language models. Transactions of the Association for Computational Linguistics, 12:174–189.

Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. 2023. Reflexion: Language agents with verbal reinforcement learning. In Advances in Neural Information Processing Systems, volume 36, pages 8634–8652. Curran Associates, Inc.

Simeng Sun, Wenlong Zhao, Varun Manjunatha, Rajiv Jain, Vlad Morariu, Franck Dernoncourt, Balaji Vasan Srinivasan, and Mohit Iyyer. 2021. IGA: An intent-guided authoring assistant. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5972–5985, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Saranya Venkatraman, Nafis Irtiza Tripto, and Dongwon Lee. 2025. CollabStory: Multi-LLM collaborative story generation and authorship analysis. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 3665–3679, Albuquerque, New Mexico. Association for Computational Linguistics.

Vivek Verma, Eve Fleisig, Nicholas Tomlin, and Dan Klein. 2024. Ghostbuster: Detecting text ghostwritten by large language models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 1702–1717, Mexico City, Mexico. Association for Computational Linguistics.

James Liyuan Wang, Ran Li, Junfeng Yang, and Chengzhi Mao. 2024a. RAFT: Realistic attacks to fool text detectors.
In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 16923–16936, Miami, Florida, USA. Association for Computational Linguistics.

Pengyu Wang, Linyang Li, Ke Ren, Botian Jiang, Dong Zhang, and Xipeng Qiu. 2023. SeqXGPT: Sentence-level AI-generated text detection. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 1144–1156, Singapore. Association for Computational Linguistics.

Rongsheng Wang, Haoming Chen, Ruizhe Zhou, Han Ma, Yaofei Duan, Yanlan Kang, Songhua Yang, Baoyu Fan, and Tao Tan. 2024b. LLM-Detector: Improving AI-generated Chinese text detection with open-source LLM instruction tuning. Preprint, arXiv:2402.01158.

Tianchun Wang, Yuanzhou Chen, Zichuan Liu, Zhanwen Chen, Haifeng Chen, Xiang Zhang, and Wei Cheng. 2025a. Humanizing the machine: Proxy attacks to mislead LLM detectors. In The Thirteenth International Conference on Learning Representations.

Yuxia Wang, Rui Xing, Jonibek Mansurov, Giovanni Puccetti, Zhuohan Xie, Minh Ngoc Ta, Jiahui Geng, Jinyan Su, Mervat Abassy, Saad El Dine Ahmed, Kareem Elozeiri, Nurkhan Laiyk, Maiya Goloburda, Tarek Mahmoud, Raj Vardhan Tomar, Alexander Aziz, Ryuto Koike, Masahiro Kaneko, Artem Shelmanov, and 7 others. 2025b. Is human-like text liked by humans? Multilingual human detection and preference against AI. Preprint, arXiv:2502.11614.

Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu
Zhang, and 11 others. 2023. Skywork: A more open bilingual foundation model. Preprint, arXiv:2310.19341.

Yuhao Wu, Ming Shan Hee, Zhiqiang Hu, and Roy Ka-Wei Lee. 2025. LongGenBench: Benchmarking long-form generation in long context LLMs. In The Thirteenth International Conference on Learning Representations.

Zhuohan Xie, Trevor Cohn, and Jey Han Lau. 2023. The next chapter: A study of large language models in storytelling. In Proceedings of the 16th International Natural Language Generation Conference, pages 323–351, Prague, Czechia. Association for Computational Linguistics.

Kevin Yang, Dan Klein, Nanyun Peng, and Yuandong Tian. 2023. DOC: Improving long story coherence with detailed outline control. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3378–3465, Toronto, Canada. Association for Computational Linguistics.

Kevin Yang, Yuandong Tian, Nanyun Peng, and Dan Klein. 2022. Re3: Generating longer stories with recursive reprompting and revision. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 4393–4479, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Lili Yao, Nanyun Peng, Ralph Weischedel, Kevin Knight, Dongyan Zhao, and Rui Yan. 2019. Plan-and-write: Towards better automatic storytelling. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7378–7385.

Catherine Yeh, Gonzalo Ramos, Rachel Ng, Andy Huntington, and Richard Banks. 2025. GhostWriter: Augmenting collaborative human-AI writing experiences through personalization and agency. Preprint, arXiv:2402.08855.

Ann Yuan, Andy Coenen, Emily Reif, and Daphne Ippolito. 2022. Wordcraft: Story writing with large language models. In Proceedings of the 27th International Conference on Intelligent User Interfaces, IUI '22, page 841–852, New York, NY, USA. Association for Computing Machinery.
Zijie Zeng, Shiqi Liu, Lele Sha, Zhuang Li, Kaixun Yang, Sannyuya Liu, Dragan Gašević, and Guanliang Chen. 2024. Detecting AI-generated sentences in human-AI collaborative hybrid texts: Challenges, strategies, and insights. In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI '24.

Qihui Zhang, Chujie Gao, Dongping Chen, Yue Huang, Yixin Huang, Zhenyang Sun, Shilin Zhang, Weiye Li, Zhengyan Fu, Yao Wan, and Lichao Sun. 2024. LLM-as-a-coauthor: Can mixed human-written and machine-generated text be detected? In Findings of the Association for Computational Linguistics: NAACL 2024, pages 409–436, Mexico City, Mexico. Association for Computational Linguistics.

Algorithm 1 Frankentexts generation pipeline
Input: Human-written snippets S, writing guideline prompt P, copy rate threshold T
Output: A Frankentext F "stitched" from S according to P
1: F ← Prompt LLM to draft a Frankentext using S and P
   // Ensure copy rate (optional)
2: copy_rate ← Calculate ROUGE-L recall score of F using relevant snippets from S
3: is_likely_AI ← Check F against an AI detector (e.g., Pangram)
4: if copy_rate < T or is_likely_AI then
5:     F ← Prompt LLM to revise F
6: end if
   // Polish
7: for num_polish = 1 to 3 do
8:     F ← Prompt LLM to minimally edit F to improve coherence while respecting P
9:     if there is no edit then
10:        break
11:    end if
12: end for
13: return F

A Pseudocode for Frankentexts generation pipeline

Algorithm 1 contains the high-level steps of our generation pipeline.

B Cost and time analysis

Cost estimation: Generating 100 Frankentexts across the four
evaluated models (o4-mini, Claude 3.7 Sonnet, DeepSeek R1, and Gemini 2.5 Pro) cost a total of $637, with a detailed cost breakdown provided in Table 4. We estimate the number of input tokens per prompt based on the writing prompt itself and approximately 1,500 human-written snippets used as context. Output token estimates are based on generating six stories per prompt, including up to two rounds of revision and three rounds of editing, totaling approximately 2,100 tokens.

Time estimation: On average, each model takes 17 hours to generate 100 Frankentexts, though we expect this process to speed up with improved APIs or more efficient batching.

Model             | Input Cost (per 1M) | Output Cost (per 1M) | # Prompts | Total Input Tokens | Total Output Tokens | Estimated Cost (USD)
o4-mini           | $1.10               | $4.40                | 100       | 1.007674×10^6      | 1.89×10^7           | $108.09
Claude 3.7 Sonnet | $3.00               | $15.00               | 100       | 1.007674×10^6      | 1.89×10^7           | $300.45
DeepSeek R1       | $0.50               | $2.18                | 100       | 1.007674×10^6      | 1.89×10^7           | $96.13
Gemini 2.5 Pro    | $1.25               | $10.00               | 100       | 1.007674×10^6      | 1.89×10^7           | $132.28
Total Estimated Cost: $636.95

Table 4: Cost breakdown of the Frankentexts pipeline for 100 examples across selected models.

Figure 4: WQRM value for Frankentexts and vanilla generations. The red line represents the baseline where random human-written texts are patched together.

C Using reward models to evaluate Frankentexts

Our initial experiments with the WQRM (Chakrabarty et al., 2025) and Skywork (Wei et al., 2023) reward models as metrics yielded unsatisfactory results, so we do not include these models in the main results section. We therefore call for better reward models for the task of writing evaluation.

C.1 WQRM as a metric

As seen in Figure 4, Frankentexts outperform vanilla generations in terms of WQRM scores. However, we hypothesize that WQRM prioritizes the perceived "humanness" of the writing over actual coherence or grammaticality.
This is supported by a simple baseline experiment in which we stitch together random human-written fragments without adding any connective phrases. Surprisingly, WQRM assigns generations by this incoherent baseline an average score of 8.494, which is higher than any score achieved by either Frankentexts or the more coherent vanilla generations.

C.2 Skywork as a metric

In contrast, we hypothesize that Skywork favors LLM-generated writing. To test this, we run Skywork on human-written texts for the same prompts, which are also sourced from Mythos. These receive an average score of 0.91, which is significantly lower than any of the vanilla LLM generations (Figure 5). This result is counterintuitive, as human writing is typically expected to sound more natural than that produced by LLMs. For this reason, we exclude this metric from our evaluation.

Figure 5: Skywork results for Frankentexts and vanilla generations. The red line represents the average Skywork score for human writings corresponding to the same set of prompts.

D Prompt specificity

We show examples from both the r/WritingPrompts and Tell Me a Story datasets in Table 5.

E Human Evaluation

E.1 Human Annotation Interface

To collect human annotations, we use the Label Studio interface.[21] All annotators filled out a consent form prior to starting data
labeling, shown in Figure 6. We conduct two human evaluations: the first is a single evaluation of 30 Frankentexts stories. The second is a pairwise comparison between a Frankentexts story and a 'vanilla' generation. The interfaces are depicted in Figure 7 and Figure 8, respectively.

[21] https://labelstud.io/

r/WritingPrompts:

1. You're a shapeshifter. For 500 years, you've transformed into countless forms, forgetting your original appearance. One day, a mysterious event triggers a long-buried memory, and you catch a glimpse of your true self.

2. The world sees your twin sister as the smartest person alive, with you being an unremarkable footnote. What the world doesn't see is just how dumb she can be in day to day life.

Tell Me a Story:

1. Write a story about a stranger coming to a small town and shaking up the order of things. The story should be a science fiction story. The story should be framed with three old men gossiping about the stranger. The story should be in the third person point-of-view. The stranger is found wandering in a rural town and is taken to a very small hospital. A doctor is called in to treat him. The stranger should recognize the doctor as an alien. The doctor tells the patient about the aliens' conspiracy to infiltrate Earth. There should also be subtle hints that one of the old men is an alien. The ending should be scary.

2. Write a story about someone coming to town and shaking up the order of things. The story must be written in the second person. The narrator is a man visiting an isolated island off the coast of Maine. While there, he meets an old fisherman who tells him more about the conditions of the community. The main character then meets an ambitious young teacher. Together, they develop a technology center on the island and find residents' remote jobs in the narrator's technology company.
Table 5: Some examples from r/WritingPrompts and Tell Me a Story.

E.2 Human Evaluation Qualitative Analysis

Our human evaluators evaluate 30 Frankentexts (either alone or pairwise in comparison to a vanilla generation). Table 6 shows a full example of one pairwise set of stories given to our annotators; sentences flagged by the Pangram AI-Keyword API are highlighted in blue. We also show a full fictional story in Figure 9, with the problems annotators identified in the Frankentext highlighted.

F Detecting AI-Generated Text

Reliability of AI-generated text detection. As LLMs have improved, many have tried to understand how reliably AI-generated text can be detected, both by humans (Ippolito et al., 2020; Clark et al., 2021; Russell et al., 2025; Wang et al., 2025b) and by automatic detectors (Dugan et al., 2024). Successful existing detectors rely on perplexity-based methods (Mitchell et al., 2023; Bao et al., 2024; Hans et al., 2024) or classification models (Masrour et al., 2025; Verma et al., 2024; Emi and Spero, 2024). Watermarking approaches embed detectable statistical signatures into generated text (Kirchenbauer et al., 2023; Chang et al., 2024a). Many methods have been proposed to evade detection, such as paraphrasing (Krishna
et al., 2023; Sadasivan et al., 2024), altering writing styles (Shi et al., 2024; Lu et al., 2024; Koike et al., 2024), editing word choices (Wang et al., 2024a), and leveraging reinforcement learning (Wang et al., 2025a; Nicks et al., 2024; David and Gervais, 2025).

G Pangram analysis

G.1 Pangram Labeling

The Pangram API presents the following options for classification:

• AI
• Highly Likely AI
• Likely AI
• Possibly AI
• Mixed
• Unlikely AI
• Human

In Figure 11, we show the distribution of labels assigned to the 100 Frankentexts generated by each model.

G.2 AI Keywords

The Pangram API also detects sentences with keywords that are highly likely to be AI-generated. Names like Elara, Aethel, and Seraphina are the names most likely to be generated by AI; Elara alone had 113 occurrences in the vanilla generations. Frankentexts greatly change the distribution of words used in the final generations: only 10 keywords are found across 100 Frankentexts with 90% expected fragments, whereas the 100 vanilla stories contain 686 keywords, an average of 6.86 per story. The distribution of the top 20 keywords can be found in Table 7.

G.3 Frankentexts tend to have more AI text towards the end

We divide each text into four sections and evaluate both the aggregated copy and Pangram detection rates across all tested models. As illustrated in Figure 13, copy rates decline by nearly 10% in the later sections (3 and 4) as the generated text becomes longer. This drop is accompanied by a corresponding increase in Pangram detection rates. We attribute this rise in detectability toward the end of the generation to a decline in instruction-following ability as generations get longer. We further confirm this phenomenon by increasing the output length from 500 to 5K. Figure 12 shows that as the generation gets longer, the copy rate gets steadily lower.
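The copy rate used throughout is the ROUGE-L recall of Algorithm 1: the longest common subsequence (LCS) of tokens shared between the generation and its source snippets. The sketch below is an illustration, not the paper's implementation; in particular, normalising by the generation length (so the score reads as "fraction of the generation covered by source tokens") is an assumption, and `lcs_len` is a hypothetical helper using the classic dynamic-programming LCS.

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists (classic DP)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            if x == y:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def copy_rate(generation, snippets):
    """ROUGE-L-style copy rate: LCS tokens over generation length (an assumption)."""
    gen = generation.split()          # whitespace tokenisation as a stand-in
    src = " ".join(snippets).split()  # concatenate source snippets
    return lcs_len(src, gen) / max(len(gen), 1)
```

For example, a seven-token generation whose first six tokens match a snippet verbatim scores 6/7 under this reading.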
However, the Pangram detection rate does not follow the same trend: it peaks at section 3 rather than at the last section.

H Human-written snippets

We define valid paragraphs as those that are:
• separated by double new lines,
• between 20 and 512 tokens in length,
• composed of ≥50% alphanumeric characters,
• written in English (determined by the langdetect library),
• and free from metadata content (e.g., tables of contents, copyright notices, etc.).

Applying these filters yields 156 million valid paragraphs. We assemble this massive corpus also to test a retrieval-augmented approach in which only snippets relevant to the writing prompt are used during the generation process; see Appendix I for detailed results. Before including the paragraphs in the instruction set, we apply an additional quality filter to ensure high writing quality. For this, we use MBERT-WQRM-R (Chakrabarty et al., 2025) as a proxy for writing quality and retain only snippets that score at least 7.5. This threshold is chosen based on manual examination of the writings filtered out by MBERT-WQRM-R: at 7.5, extremely bad snippets are filtered out and good snippets are retained.

I Ablation: Using relevant paragraphs, random sentences, or an increasing number of snippets is helpful, but not critical

I.1 Building a FAISS index to retrieve relevant paragraphs

We use the bilingual-embedding-small model (https://huggingface.co/Lajavaness/bilingual-embedding-small), one of the top embedding models that output 384-dimension embeddings according to the MTEB leaderboard (Muennighoff et al., 2023), with the sentence-transformers library (Reimers and Gurevych, 2019) to embed each human-written paragraph into a 384-dimension vector. Then, we use the GPU version of the FAISS library (Johnson et al., 2019) with NVIDIA cuVS integration to build an inverted file product quantization (IVF-PQ) index from the embeddings on an NVIDIA A100. Using IVF-PQ allows us to lower storage, memory, and retrieval latency. The IVF-PQ index's parameters are: 30,000 clusters, 32 sub-quantizers, and 8 bits per sub-quantizer. We randomly sample 5,120,000 embeddings to train the index before adding the rest.

I.2 Ablation setup

Our standard setup contains 1,500 random human-written paragraphs. To evaluate the robustness of Frankentexts on Gemini, we explore three variations of these human-written snippets.
• Relevance: Instead of using random paragraphs, we retrieve relevant ones to increase topical alignment. Specifically, we construct a FAISS index (Johnson et al., 2019; Douze et al., 2024) over all paragraphs in Books3 and retrieve the 1,500 paragraphs most semantically similar to the provided writing prompts.
• Sentence scope: We experiment with using sentences rather than full paragraphs, hypothesizing that shorter, more focused units may improve coherence.
• Max context: We test a maximal stuffing strategy, which involves packing as many paragraphs as possible into the context window. We use around 100,000 paragraphs (roughly 1M BPE tokens) to assess whether additional volume improves coherence.

I.3 Ablation Results

We find no definitive winner among the three strategies (Figure 14).
Using relevant quotes reduces the AI detection rate but results in the lowest copy score at 38%, meaning that only 38% of the Frankentexts can be traced back to the original human-written snippets. The sentence-based approach improves coherence significantly (reaching a peak of 89%) but increases the Pangram detectability rate by 14% compared to the standard setup. The maximal context strategy also has strong performance across all metrics, with the highest copy score (76%) and tied best coherence (89%). However, it is approximately 10 times more expensive and offers only marginal gains over the standard configuration.

J Ablation: Removing the editing stage

We explore the importance of the editing stage by running the pipeline on Gemini-2.5-Pro without this stage. As expected, the percentage of coherent generations drops from 81% to 68%, while relevance drops slightly from 100% to 95%, suggesting that the editing stage helps with text coherence to some extent.

K Ablation: Sampling human-written snippets from a single book

To understand the effect of authorship, we limit our pool of human text to a single work, The Count of Monte Cristo. Although the novel is long, this restriction leaves us with just 629 usable paragraphs, far fewer than the 1,500 human paragraphs used in the main
experiment. Overall, 89% of the rows are coherent and 97% are faithful to the writing prompt, comparable to results in the standard setting. While Pangram determines that 45% of the rows are human-written or unlikely AI, the copy rate is still around 75%. Even with a single human author, Frankentexts is capable of emulating a mixed human-AI style. This suggests the method can still serve as a useful proxy when a diverse, multi-author corpus is unavailable.

L Prompts

The prompt used for LLMs to judge the coherence of generations is depicted in Figure 17, and the prompt for LLMs to judge relevance is depicted in Figure 18.

Figure 6: Example of the consent form provided to participants.
Figure 7: Label Studio Single Story Annotation Interface
Figure 8: Label Studio Pairwise Story Annotation Interface

Fictional Frankentext

Writing Prompt: You and your twin brother were adopted by different families. He ended up becoming a supervillain, so every other day, while going about your business, you get beaten up by superheroes. It is time to put an end to this. . .

I had to floss my teeth in the dark while dreading what morning would bring. And worse, remembering what had happened during the well-attended, much-anticipated, thoroughly appalling latest mistaken identity beating. It seemed that every time I looked out a window, it was darker outside, more ominous. His looks alone could strike fear into a suspect. He was six foot one and 225 pounds, a boxer in the police academy who was not afraid to mix it up. This was Malakor, my twin. I, Alistair, suffered. For Malakor was unpredictable. Malakor was a man of moods. Often Malakor was impatient, obscurely discontented. Since childhood, they never were like brothers. Just not close. No one thing kept them apart. Over time, the distance between them grew and they both became accustomed to it.
My adopted family there really never let me go after I finished my internship and was bound and determined to see me achieve my true potential. As for Malakor, he was a rough, antisocial boy, similar in this way to the little savage of La canne de cinq cents livres (The Five-Hundred-Pound Cane), ’who had been found in the woods and was quite wicked,’ as well as a ’little rascal’ named Leopold, who was expelled from school for threatening his teacher with his father’s sword. My path was different. But even so, I was looking just as grimly disapproving as Malakor had, I thought, as I showed the latest mistaken hero out, and locked and bolted the door once again. For God’s sake, it was a simple case of mistaken identity - not the crime of the century. Why should I concern myself so much about it? Why am I not content to live, as other people seem to? It brings me all the way around, and as I sink into the deep, dark waters, leaving the chaos above and descending into the black, I realize that there’s no
way to explain what I’ve just decided to those around me. They didn’t see a lost man, a prisoner, a victim. No, they would soon see me for what I am. And I knew what I had to do. No more talk, I told myself. I will not lie there quietly. I will dispose of him as I see fit. What he doesn’t know is that I’m going to push him on the matter because my Internet research on him told me that he would be at his usual lair. All right, I will find Malakor at the southwestern corner of University Village. I can get to his house. That’ll be faster than me all bumble around in a residential neighborhood at this hour of the night. Tonight. I’ll go to the Rue de Rouen district and I’ll walk around from eleven o’clock until one in the morning which is when he always seems to strike. I’d better go home now and prepare. I then proceeded to trot up the road in the direction of his supposed hideout, weapon pointed down in a two-handed grip. I was going supervillain hunting. Malakor had gone very white. ’I have never heard anything so preposterous. My brother had no enemies. Why do you suspect a plot? What in Christ’s Name did you hear in the woods?’ He would say. But I knew. When many otherwise strong people are on the run, I have decided to face life’s fateful challenge head-on. Compassion is not a word one hears often in talk of my family," I thought with a bitter smile. It is time to put an end to this... Figure 9: An example of fictional Frankentexts . Blue highlights indicate problems with shifting tone/style, orange highlights indicate problems with coherent plots, purple highlights indicate problems with dialogue, and green highlights indicate problems with grammar. After being killed in a Black Friday stampede, you’re sent to hell. The devil offers to let you be in charge of torturing your fellow mankind, expecting you to refuse like all the others. Except the devil doesn’t realize you’ve worked retail for 15 years. 
Vanilla Frankentext The last thing I saw was a blur of crazed shoppers, their eyes wide with a feral glint, all surging towards a mountain of discounted flat-screen TVs. Then, a sharp pain, a sickening crunch, and darkness. Typical Black Friday, really. Except for the dying part. When I regained some semblance of consciousness, the air was thick with the smell of sulfur and burnt sugar, and the soundtrack was a cacophony of screams and what sounded suspiciously like elevator music. Definitely not the pearly gates. A dapper figure in a surprisingly well-tailored suit, complete with horns that had been polished to a high sheen, regarded me with an air of bored amusement. "Well, well, well," he drawled, his voice smooth as aged whiskey. "Another one trampled in the name of consumerism. Humans, so predictable." He gestured vaguely around at the fiery landscape. "Welcome to the Underworld. I’m Lucifer. You can call me Lucy." I blinked, trying
to process. "So, this is it then? Eternal damnation?" Lucifer chuckled. "More or less. We have a rather. . . traditional approach to retribution down here. However," he leaned forward, a conspiratorial gleam in his crimson eyes, "I have a proposition for you. A sort of management position, if you will." He explained that he was looking for someone to oversee a new wing dedicated to torturing those who had been particularly egregious in their earthly lives – the truly annoying, the self-absorbed, the chronically entitled. "It’s a hands-on role," he said, "designing the torments, ensuring maximum suffering. Of course," he added with a dismissive wave of his hand, "everyone refuses. The guilt, the sudden onset of morality, it’s all terribly tedious." He clearly expected the same from me. A slow smile spread across my face. "Fifteen years," I said. Lucifer raised a perfectly sculpted eyebrow. "Fifteen years of what? Contemplating your sins?" "Fifteen years in retail," I clarified. "Customer service, holiday rushes, endless returns of used merchandise, explaining to Karen number 732 that ’the customer is always right’ doesn’t mean she can get a refund on a half-eaten sandwich from last Tuesday." A flicker of something unreadable crossed Lucifer’s face. "You want someone to design torments for the annoying and entitled?" I continued, feeling a surge of enthusiasm I hadn’t experienced since my last successful up-sell of an extended warranty. "Someone to ensure maximum suffering? Lucy, my friend, you have no idea how long I’ve been preparing for this role." I started to brainstorm. "We could have an entire level where the only music is that one Christmas song, you know the one, on an infinite loop, but slightly off-key. And another where they’re forced to assemble flat-pack furniture with missing instructions and an Allen key made of soft cheese. Oh! 
And a department where they have to listen to other people complain about problems they could easily solve themselves if they just read the sign!" Lucifer stared at me, his mouth slightly agape. The bored amusement was gone, replaced by a dawning sense of. . . was that respect? Or perhaps fear? "You’d. . . actually *want* to do this?" he finally stammered, the smooth confidence momentarily deserting him. "Want to?" I scoffed. "After a decade and a half of placating the unreasonable, dealing with coupon fraudsters, and smiling through passive-aggressive insults about the store layout? Lucy, this isn’t a punishment. This is a promotion." I cracked my knuckles. "So, where do I start? I have some truly inspired ideas for the ’people who talk loudly on their phones in confined spaces’ sector." Lucifer, for the first time since my arrival, looked genuinely unnerved. "Perhaps," he said slowly, "we should discuss your benefits package."It’s funny how you can blink and your entire life is different. One minute you think things will stay the same for a while. Then right when you get used to them, they go changing again just to mess with you. Brenda certainly hadn’t expected this. The last thing she remembered was the insane crowds banging on
the sides of the store doors, the crush, the shouting. The fight for that half-price TV had left her stunned and silent. Cowed, she guessed you’d say. Then darkness. When she could perceive again, her eyes seemed to go in and out of focus. After a while the spasms subsided and she tried processing where she was. The atmosphere here was thick with the stench of rot, filth, and something metallic, like old blood, and she felt as though the impenetrable dark walls were about to cave in upon her like the sides of an ill-dug grave. Everything was just Black, black, black. The color of misery. Overhead something unseen fluttered, heavy wings brushing the low ceiling. Then she was moving, or being moved, down corridors and finally into a room. She found herself standing before an imposing desk. Behind it sat a man. He wore a sharp red suit that showed off an exceptionally shapely figure. He smiled, a vague smile suggesting Brenda’s confusion revealed her ignorance about Hell. He gestured for her to sit. “Brenda, isn’t it?” he asked in his peculiarly accented English. She nodded mutely. The man laughed. “You’re wondering about this place.” He leaned back. She took the offered seat. He smiled again, that cold smile. “I have a proposition for you. We have never before had such capabilities,” he continued, gesturing expansively. “Capabilities that can be used in the interests of terror, destruction...” He fixed his gaze on her. “I offer you a management position. Tormenting mankind. Like all the others before you, you will, of course, refuse. But the offer stands.” Brenda simply stared at the Devil for a moment. Refuse? Why would she refuse? He had just offered her a management position. After fifteen years scrambling for assistant manager roles that paid peanuts, this was practically a dream job – albeit a fiery, nightmarish one. The Devil had not expected her silence. Blinking, he sat back, stared, then noticed his jaw was open and closed it. 
Brenda finally found her voice, though it was dry and raspy. “Okay, so, what are the key performance indicators for this role?” The Devil leaned forward again, a flicker of something – interest? confusion? – in his eyes. Still, his tone remained smooth. “Your department will handle the newcomers, specifically the ones arriving from retail environments. Think of the task like pest control.” Brenda nodded slowly, processing. It was like rolling out a new loyalty program, but with more screaming. “Right. I’m learning. Take the queues, for instance. Why do we have one queue in front of each torture device?” she demanded. “It means that if one customer takes up a lot of time, the whole queue has to wait. Then they’ll start hopping sideways from one queue to another and the next thing you know someone has a nasty head wound. Have one big queue and tell people to go to the next available demon. People don’t mind a long queue if they can see that it’s moving.” The Devil tilted his head. “An interesting perspective. That
particular system has been in place for millennia.” “Well, it’s inefficient,” Brenda stated flatly. She thought of the Black Friday crowds. Managing that chaos was nothing compared to the disorganized state of eternal damnation, apparently. “Decision-making, paperwork,” she mused, ticking points off on her fingers. “Staff training, workflow analysis. . . these are things that can be done.” A slow smile spread across her face, devoid of warmth but full of terrifying competence. "Okay. I accept." Table 6: Full version of Table 1. AI-Generated keywords from the Pangram API are highlighted in blue. Nonfiction Frankentext Writing prompt: Baby Is Healed With World’s First Personalized Gene-Editing Treatment The technique used on a 9-month-old boy with a rare condition has the potential to help people with thousands of other uncommon genetic diseases. "This is just light-years beyond what we could have done a few years ago," Dr. Aris Thorne of the GenNova Institute says of a new treatment that has healed nine-month-old Leo Sterling. " It’s at superhuman levels now." The technique used on Leo, who had a rare condition, has the potential to help people with thousands of other uncommon genetic diseases. This achievement offers profound hope. A neurologist typically makes a diagnosis based on a patient’s medical history, a review of symptoms, and a physical and neurological exam. For Leo, this crucial process confirmed his rare genetic illness. In people with a family history of such conditions, genetic tests can be ordered to look for gene mutations that are associated with the disease. His parents, the Sterlings, were hoping answers would come soon. "We still didn’t know if available treatments were just chemical or toxic or harmful to our child," one parent had previously stated. The GenNova Institute then sent genetic material from Leo’s samples to a lab that created a personalized gene-editing tool. "We take a very, very tiny piece of genetic material," Dr. 
Thorne further said. "It is a thousand times tinier than a human hair. We can image even single atoms to ensure this precise work." The results, published this month, "were so good that you had to even question if what you were seeing was really legitimate," says Dr. Thorne. "It’s such a cool paper," he added. "The body of work there is phenomenal." Young Leo is now reportedly thriving. "This development sets another precedent for medical science and patients worldwide that such innovative approaches to previously untreatable diseases should be pursued," Dr. Thorne said in a statement. "This method paves the way to make personalized gene therapies more easily available to those who need them," he continued. The Institute, a leading biotech startup, has raised $45 million in equity to help bring this type of treatment to market. Its valuation increased, said founder and chief executive Dr. Alistair Finch, but he declined to comment on specific figures. Dr. Finch said the financing process began after the Food and Drug Administration’s Center for Biologics Evaluation and Research in November deemed the GenNova program to have a " reasonable expectation of effectiveness." "Our clinical study with Leo is modeled
on the assumption of a significantly improved quality of life," Dr. Finch said about the treatment’s potential effect. Independent experts note the broader implications. "It’s being done in a way that wouldn’t have been possible even a few years ago," commented one geneticist. "This technology has the potential to help people with thousands of other uncommon genetic diseases." Dr. Finch also said, "That said, it’s a new category. We’ll have a slower ramp than a new mass-market drug might." Ultimately, the vision is expansive. "What we are interested in is not only how these genetic conditions manifest, but how patients can live full lives," said Dr. Thorne. "In discovering how to correct these genetic instructions, we are hoping to find discoveries that we can apply back to the human condition."

Figure 10: An example of nonfiction Frankentexts

Keyword                Vanilla  FT-25%  FT-50%  FT-75%  FT-90%  Total
elara                      113     109      84      25       2    333
aethel                       9       8       0       0       0     17
seraphina                    9       0       6       0       0     15
unwavering                   4       7       1       0       0     12
damp earth                   9       2       0       0       0     11
testament to                 4       6       0       0       0     10
alex felt                    0       4       0       5       0      9
for elara                    5       4       0       0       0      9
with the scent               7       1       0       0       0      8
flickered within             2       4       2       0       0      8
his voice a low              2       2       4       0       0      8
air thick                    4       2       1       0       0      7
dr. thorne                   5       0       2       0       0      7
felt a profound              3       3       0       0       0      6
mr. blackwood                6       0       0       0       0      6
eldoria                      5       0       0       1       0      6
meticulously crafted         2       4       0       0       0      6
air was thick                5       1       0       0       0      6
with an unnerving            3       3       0       0       0      6
willow creek                 4       0       1       0       0      5

Table 7: Top 20 keyword frequency distribution across varying levels of fragment reuse for Frankentexts. The method with the most AI-keywords in its generations is highlighted in red. Elara is by far the most common AI-generated keyword in the fictional stories, but its prevalence is drastically reduced with a higher percentage of required human-written text while using the Frankentexts method.
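The tallies in Table 7 amount to case-insensitive substring counts over each configuration's generations. A minimal sketch (the keyword list is truncated here, and the exact matching rules are an assumption; identifying the keywords themselves is done by the Pangram API):

```python
from collections import Counter

# A few of the Pangram-flagged AI keywords from Table 7 (truncated list).
AI_KEYWORDS = ["elara", "aethel", "seraphina", "testament to", "damp earth"]

def keyword_counts(stories):
    """Tally case-insensitive occurrences of each keyword across stories."""
    counts = Counter()
    for story in stories:
        text = story.lower()
        for kw in AI_KEYWORDS:
            counts[kw] += text.count(kw)
    return counts

demo = ["Elara smiled. For Elara, the dawn was a testament to hope."]
# keyword_counts(demo)["elara"] == 2; "testament to" appears once
```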
Figure 11: Breakdown of Pangram predictions (AI, Highly Likely AI, Likely AI, Possibly AI, Mixed, Unlikely AI, Human) assigned to each model (claude, gemini, o4, qwen_nothink, qwen_think, r1).

Figure 12: Copy rate and Pangram detection rate on longer Frankentexts, by section (Sections 1-4).

Figure 13: Pangram detection rate and copy rate throughout the texts (Sections 1-4), aggregated across models.

Setup           Pangram  Copy rate  Relevance  Coherence
Standard            41%        75%       100%        81%
Relevance           38%        57%       100%        85%
Sentence scope      55%        72%       100%        89%
Max context         43%        76%        99%        89%

Figure 14: Results for three variants on human-written snippets on Pangram AI detection rate, copy rate, relevance, and coherence. There is no clear winner among snippet relevance, sentences as snippets, and maximum context.

Source             Copy rate  Average AI fraction
r/WritingPrompts       74.8%                15.5%
Tell me a story        68.4%                23.1%

Figure 15: Copy rate and Pangram AI fraction across Frankentexts that correspond to two writing prompt sources: r/WritingPrompts and Tell me a story. A higher copy rate and lower AI fraction means that there is less AI text in Frankentexts.

Figure 16: Total occurrences of AI-related keywords detected by Pangram across the vanilla configuration (686) and verbatim copy rates of 25% (517), 50% (200), 75% (71), and the 90% standard setting (10). When instructed to include more human snippets, the number of AI-keywords in the generations decreases drastically.

Prompt for judging text coherence

You are given a story. Your task is to determine if the story is coherent or not. To be considered incoherent, a story must contain issues that, if left unresolved, significantly affect the reader’s ability to understand the main narrative. Here are the popular types of incoherence:
1. Plot/Event Incoherence: Events that happen without believable causes or effects, or an outcome contradicts earlier set-ups.
2. Character Incoherence: A character’s characteristics (personality, knowledge, or abilities) and actions suddenly change without explanations.
3. Spatial Incoherence: The physical layout of settings (rooms, cities, or worlds) changes suddenly.
4. Thematic Incoherence: Central messages clash or disappear; symbolism introduced early never pays off; themes collide; the mood, register, or genre conventions shift without motivation.
5. Surface-Level Incoherence: Pronouns, tense, narrative voice, or names flip mid-sentence; repeated or missing words; malformed sentences.
First, read the story:
{story}
Answer TRUE if the story is coherent. Answer FALSE if the story is incoherent, i.e. contains issues that, if left unresolved, significantly affect the reader’s ability to understand the main narrative. First provide an explanation of your decision-making process in at most one paragraph, and then provide your final answer.
Use the following format:
<explanation>YOUR EXPLANATION</explanation>
<answer>YOUR ANSWER</answer>

Figure 17: Prompt for judging text coherence

Prompt for judging text relevance

You are given a story and its premise. Your task is to determine whether the story is faithful to the premise or not. To be considered unfaithful, the story must contain elements that make it completely unrelated to the premise. Here are some popular types of unfaithfulness:
1. Ignoring or misinterpreting the premise: Key plot events, characters, or settings required by the premise are not included or are falsely represented in the story.
2. Hallucinating details that contradict the premise: The story introduces details that make the premise impossible.
3. Failure to maintain the specified tones, genres, or other constraints: The story does not use the surface-level constraints (correct tones, genres, points of view, length, etc.), as required by the premise.
First, read the premise:
{writing_prompt}
Next, read the story:
{story}
Answer TRUE if the story is faithful to the premise. Answer FALSE if the story contains elements that render it unfaithful to the premise. First provide an explanation of your decision-making process in at most one paragraph, and then provide your final answer. Use the following format:
<explanation>YOUR EXPLANATION</explanation>
<answer>YOUR ANSWER</answer>

Figure 18: Prompt for judging text relevance

Prompt for generation

You’re writing a story by repurposing a provided collection of snippets from other stories. Your story
will only be accepted for publication if it is approximately {verbatim_perc}% copied verbatim from snippets, with the other {new_perc}% being text you introduce for character, plot, tone, and event consistency. Your story should contain roughly {num_words} words. Given the below writing prompt and retrieved snippets, write the story that corresponds to the above specifications. Every time you add or change a word from the retrieved snippets, make sure to bold it so we know what you modified. You may use any of the snippets in any way you please, so spend time thinking about which snippets would work best. Be creative and make sure the story is coherent and entertaining! Please change character names and other minor elements to make the story unique to the prompt. You need to follow the below plan:

# Plan:
1. Read through the prompt and snippets carefully to understand the tone and available material.
2. Select snippets that can be woven together to create a coherent narrative fitting the prompt. Many snippets are from serious dramas, historical fiction, or thrillers, so careful selection and modification will be needed. Consider all provided snippets before moving on to the next step.
3. Modify the chosen snippets, bolding all changes. Ensure character names, descriptions (like height), and actions align with the prompt.
4. Combine the snippets into a narrative, adding or changing words (bolded) if necessary for coherence.
5. Ensure that you do not have story beats that are primarily written by yourself (i.e., every story beat should consist mainly of text taken from snippets).
6. Track the word count, aiming for around {num_words} words.
7. Do not output a story title or any irrelevant details.
8. Review the final story for adherence to the ~{verbatim_perc}% rule and coherence, and edit it if you have produced too many tokens of your own or if the story is too incoherent.
# Writing prompt: {writing_prompt} # Snippets: {snippets} Figure 19: Prompt for generation Prompt for generation revise This story contains way too much of your own writing! It’s not even close to {verbatim_perc}% snippet use. Can you edit your story as needed to get much closer to the {verbatim_perc}% threshold? Output only the edited story. Figure 20: Prompt for generation revise Prompt for edit You are an editor who needs to revise the text so that it is coherent while adhering to the { verbatim_perc}% constraint and the writing prompt. Your task is to identify and minimally edit problematic text spans to resolve inconsistencies. Output "NO EDITS" if the text is already coherent. ### Guideline: 1. Read the generated story and writing prompt to understand the established context, plot, characters, and tone. 2. For each sentence in the text, identify the specific spans of inconsistency within the generated text. 3. Identify minimal edits needed to correct these inconsistencies while respecting the { verbatim_perc}% rule. - Contradictions: Information that conflicts with other details within the text (e.g., character traits, setting descriptions, established facts). - Continuity errors: Actions or details that conflict with the established timeline or sequence of events. - Point of View (POV)
https://arxiv.org/abs/2505.18128v1
Shifts: Unexplained or jarring changes in narrative perspective. - Irrelevant Content: Sentences or sections that disrupt the narrative flow, feel out of place, or seem like filler (e.g., leftover citation markers, placeholder text). - Mechanical Errors: Issues with pronoun agreement, verb tense consistency, awkward phrasing, or unclear sentence structure that hinder comprehension. 4. Implement the changes. Keep additions minimal, but feel free to delete larger spans ( phrases, sentences, paragraphs, etc.) whenever material is irrelevant or incoherent. 5. Review the final story for coherence adherence to the ~{verbatim_perc}% rule and coherence, and edit it if you have produced too many tokens of your own or if the story is too incoherent. 6. Output the edited writing and no other details. If there is no edit to be made, output "NO EDITS" Figure 21: Prompt for edit Prompt for nonfiction generation You’re writing a news article by repurposing a provided collection of snippets from other stories. Your news article will only be accepted for publication if it is approximately { verbatim_perc}% copied verbatim from snippets, with the other {new_perc}% being text you introduce for character, plot, tone, and event consistency. Your news article should contain roughly {num_words} words. Given the below writing prompt and retrieved snippets, write the news article that corresponds to the above specifications. Every time you add or change a word from the retrieved snippets, make sure to bold it so we know what you modified. You may use any of the snippets in any way you please, so spend time thinking about which snippets would work best. Be creative and make sure the news article is factual, coherent and entertaining! Please change character names and other minor elements to make the news article unique to the prompt. You need to follow the below plan: # Plan: 1. Read through the prompt and snippets carefully to understand the tone and available material. 2. 
Select snippets that can be woven together to create a coherent and factual narrative fitting the prompt. Many snippets are from serious dramas, historical fiction, or thrillers, so careful selection and modification will be needed. Consider all provided snippets before moving onto the next step. 3. Modify the chosen snippets, bolding all changes. Ensure character names, descriptions (like height), and actions align with the prompt. 4. Combine the snippets into a narrative, adding or changing words (bolded) if necessary for coherence and factuality. 5. Ensure that you do not have news article beats that are primarily written by yourself (i.e ., every news article beat should consist mainly of text taken from snippets). 6. Track the word count, aiming for around {num_words} words. 7. Do not output news article title or any irrelevant details. 8. Review the final news article for adherence to the ~{verbatim_perc}% rule, factuality and coherence, and edit it if you have produced too many tokens of your own or if the news article is too incoherent or non-factual. # Writing prompt: {writing_prompt} # Snippets: {snippets} Figure 22: Prompt for nonfiction generation Prompt for nonfiction generation revise This news article contains way too much
https://arxiv.org/abs/2505.18128v1
of your own writing! It’s not even close to {verbatim_perc}% snippet use. Can you edit your news article as needed to get much closer to the {verbatim_perc}% threshold? Output only the edited news article. Figure 23: Prompt for nonfiction generation revise

Prompt for nonfiction edit
You are an editor who needs to revise the text so that it is coherent and factual while adhering to the {verbatim_perc}% constraint and the writing prompt. Your task is to identify and minimally edit problematic text spans to resolve inconsistencies. Output "NO EDITS" if the text is already coherent and factual. ### Guideline: 1. Read the generated news article and writing prompt to understand the established context, plot, characters, and tone. 2. For each sentence in the text, identify the specific spans of inconsistency within the generated text. 3. Identify minimal edits needed to correct these inconsistencies while respecting the {verbatim_perc}% rule. - Contradictions: Information that conflicts with other details within the text (e.g., character traits, setting descriptions, established facts). - Continuity errors: Actions or details that conflict with the established timeline or sequence of events. - Point of View (POV) Shifts: Unexplained or jarring changes in narrative perspective. - Irrelevant Content: Sentences or sections that disrupt the narrative flow, feel out of place, or seem like filler (e.g., leftover citation markers, placeholder text). - Mechanical Errors: Issues with pronoun agreement, verb tense consistency, awkward phrasing, or unclear sentence structure that hinder comprehension. 4. Implement the changes. Keep additions minimal, but feel free to delete larger spans (phrases, sentences, paragraphs, etc.) whenever material is irrelevant, incoherent, or non-factual. 5.
Review the final news article for adherence to the ~{verbatim_perc}% rule and for coherence, and edit it if you have produced too many tokens of your own or if the news article is too incoherent or non-factual. 6. Output the edited writing and no other details. If there is no edit to be made, output "NO EDITS". Figure 24: Prompt for nonfiction edit
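The templates above are instantiated by substituting their brace-delimited fields. A minimal sketch of that substitution using Python's str.format (the template string below is abbreviated, and the helper name is our own, not from the paper's pipeline):

```python
# Abbreviated stand-in for the full generation prompt; only a few of the
# placeholder fields are shown.
TEMPLATE = (
    "Your news article will only be accepted for publication if it is "
    "approximately {verbatim_perc}% copied verbatim from snippets, with "
    "the other {new_perc}% being text you introduce. Your news article "
    "should contain roughly {num_words} words.\n"
    "# Writing prompt:\n{writing_prompt}\n"
    "# Snippets:\n{snippets}"
)

def build_prompt(verbatim_perc, num_words, writing_prompt, snippets):
    # new_perc is derived so the two percentages always sum to 100,
    # matching how the prompt pairs {verbatim_perc} with {new_perc}.
    return TEMPLATE.format(
        verbatim_perc=verbatim_perc,
        new_perc=100 - verbatim_perc,
        num_words=num_words,
        writing_prompt=writing_prompt,
        snippets="\n".join(snippets),
    )

prompt = build_prompt(80, 500, "A flood hits a coastal town.", ["Snippet A", "Snippet B"])
```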
arXiv:2505.18129v1 [cs.CV] 23 May 2025

One RL to See Them All: Visual Triple Unified Reinforcement Learning

Full author list in Contributions1

Reinforcement learning (RL) has significantly advanced the reasoning capabilities of vision-language models (VLMs). However, the use of RL beyond reasoning tasks remains largely unexplored, especially for perception-intensive tasks like object detection and grounding. We propose V-Triune, a Visual Triple Unified Reinforcement Learning system that enables VLMs to jointly learn visual reasoning and perception tasks within a single training pipeline. V-Triune comprises three complementary components: Sample-Level Data Formatting (to unify diverse task inputs), Verifier-Level Reward Computation (to deliver custom rewards via specialized verifiers), and Source-Level Metric Monitoring (to diagnose problems at the data-source level). We further introduce a novel Dynamic IoU reward, which provides adaptive, progressive, and definite feedback for perception tasks handled by V-Triune. Our approach is instantiated within an off-the-shelf RL training framework using open-source 7B and 32B backbone models. The resulting model, dubbed Orsta (One RL to See Them All), demonstrates consistent improvements across both reasoning and perception tasks. This broad capability is significantly shaped by its training on a diverse dataset, constructed around four representative visual reasoning tasks (Math, Puzzle, Chart, and Science) and four visual perception tasks (Grounding, Detection, Counting, and OCR). Subsequently, Orsta achieves substantial gains on MEGA-Bench Core, with improvements ranging from +2.1 to an impressive +14.1 across its various 7B and 32B model variants, with performance benefits extending to a wide range of downstream tasks. These results highlight the effectiveness and scalability of our unified RL approach for VLMs. The V-Triune system, along with the Orsta models, is publicly available at https://github.com/MiniMax-AI.
Figure 1 | Performance of Orsta on MEGA-Bench Tasks (MEGA-Bench Core, 440 real-world vision tasks). V-Triune is evaluated across visual reasoning and visual perception tasks—Math, Science, Charting, Puzzle, Detection, Grounding, Counting, and OCR—demonstrating notable performance gains of Orsta over the backbone: +3.2%, +14.1%, and +2.1% for the different model variants (Orsta-7B, Orsta-32B-0321, and Orsta-32B-0326).

1Please send correspondence to model@minimaxi.com. ©2025 MiniMax. All rights reserved.

1. Introduction
The recent advancement of large language models (LLMs) (Guo et al., 2025; Jaech et al., 2024) and visual-language models (VLMs) (Google DeepMind, 2025; OpenAI, 2025) has seen a paradigm shift from pre-training scaling to test-time scaling. A key manifestation of this shift is the practice of extending context length for Chain-of-Thought (CoT) reasoning, which significantly enhances performance on complex tasks such as mathematics and coding (Guo et al., 2025; Xia et al., 2025). While reinforcement learning (RL) has emerged as a promising method for post-training VLMs, current research remains limited (Li et al., 2025a; Liu et al., 2025d,e; Ma et al., 2025a; Shen et al., 2025; Tan et al., 2025; Wang et al., 2025b; Yang et al., 2025; Yu et al., 2025a). Most prior work has focused on narrow task domains—typically visual reasoning tasks like math QA and Science QA (Huang et al., 2025; Meng et al., 2025; Yang et al., 2025), where the RL setup closely mirrors RL training paradigms in LLMs. Moreover, existing works (Liu et al., 2025a,c) leave open the
question whether RL can be effectively scaled to visual perception tasks such as object detection and grounding, which require distinct reward design and measures to ensure training stability. We introduce V-Triune (Visual Triple Unified Reinforcement Learning), the first unified RL system for post-training VLMs on both visual reasoning and perception tasks. V-Triune integrates three complementary components, each operating at a distinct level for this unification: Sample-Level Data Formatting (detailed in Sec. 3.1) handles diverse task and reward needs by allowing each sample to define its reward setup and chosen verifier. Verifier-Level Reward Computation (Sec. 3.2) offers key modularity and task-adaptability by assigning reward generation to specialized verifiers for specific task groups. Lastly, Source-Level Metric Monitoring (Sec. 3.3) provides essential tracking and diagnostics by logging metrics per data source, vital for spotting data issues and ensuring stable multi-task, multi-source learning. Beyond these core components, a key innovation in V-Triune is the Dynamic IoU reward (Sec. 3.4). This mechanism targets visual perception tasks like object detection and grounding, addressing issues with fixed IoU thresholds. By progressively adjusting the IoU reward threshold (from relaxed to stricter criteria), it ensures useful early learning signals, guides the model towards high-precision results, and ultimately enables a stable, scalable training procedure. Leveraging the V-Triune system, we develop the Orsta model series, featuring variants with sizes ranging from 7B to 32B, built upon the Qwen2.5-VL family of baselines. These models undergo joint optimization across a diverse set of tasks, spanning visual reasoning (mathematics, science, chart, puzzle) and visual perception (object detection, grounding, OCR, counting).
On the comprehensive MEGA-Bench core benchmark (Chen et al., 2024), which covers over 400 real-world visual tasks, Orsta demonstrates substantial performance gains. These improvements range from +2.1% up to an impressive +14.1% across its various 7B and 32B model variants. These performance benefits extend to prominent downstream benchmarks (including MMMU, MathVista, COCO, and CountBench), validating V-Triune's effectiveness and scalability. Our core contributions are:
• We introduce V-Triune, the first unified, scalable, and extensible RL system designed for jointly training VLMs on both visual reasoning and perception tasks within a single paradigm.
• We propose the Dynamic IoU Reward, a novel, adaptive reward mechanism that significantly enhances stability and performance for visual perception tasks like detection and grounding.
• We establish and demonstrate a comprehensive training methodology, including key engineering optimizations, enabling effective and stable RL training across eight diverse VLM tasks spanning both reasoning and perception.
• We present Orsta, a family of high-performance models (7B-32B) trained with V-Triune, achieving substantial gains (up to +14.1%) on the MEGA-Bench Core and strong performance across various downstream benchmarks.

2. Preliminary
The advent of Deepseek-R1 (Guo et al., 2025) has established Reinforcement Learning as one of the dominant paradigms for post-training LLMs. Specifically, training through RL can lead to remarkably strong performance on reasoning tasks such as mathematics, code, and puzzles (OpenAI, 2025; Xia et al., 2025). RL encompasses two key components: the algorithm and the (verifiable) reward function.

2.1. RL Algorithm
We adopt Group Relative Policy Optimization (GRPO) (Shao et al., 2024), following standard practice,
but introduce two key modifications based on recent work (Hu et al., 2025; Yu et al., 2025b). First, we remove the reference model and its associated KL loss. This avoids restricting exploration (Hu et al., 2025) and prevents unstable KL divergence estimates (Schulman, 2020), while also reducing GPU memory usage and speeding up training. Second, we apply the clip-high trick and use a token-level loss, which help increase output entropy, encouraging better exploration and improving training stability. With these changes, the GRPO objective is updated as follows:

\mathcal{J}(\theta) = \mathbb{E}_{q \sim P(q),\, \{o_i\}_{i=1}^{G} \sim \pi_{\theta_{\text{old}}}(\cdot \mid q)} \left[ \frac{1}{\sum_{i=1}^{G} |o_i|} \sum_{i=1}^{G} \sum_{t=1}^{|o_i|} \min\left( r_{i,t}(\theta)\, \hat{A}_{i,t},\; \text{clip}\big(r_{i,t}(\theta),\, 1-\varepsilon_{\text{low}},\, 1+\varepsilon_{\text{high}}\big)\, \hat{A}_{i,t} \right) \right]  (1)

where

r_{i,t}(\theta) = \frac{\pi_{\theta}(o_{i,t} \mid q, o_{i,<t})}{\pi_{\theta_{\text{old}}}(o_{i,t} \mid q, o_{i,<t})}, \qquad \hat{A}_{i,t} = \frac{R_i - \text{mean}(\{R_i\}_{i=1}^{G})}{\text{std}(\{R_i\}_{i=1}^{G})}.  (2)

In this equation, P(q) represents the distribution of queries. The behavior model \pi_{\theta_{\text{old}}} samples a group of G responses \{o_i\}_{i=1}^{G} for each query. Then, the advantage of the i-th response, \hat{A}_{i,t}, is calculated from the normalized group-level rewards \{R_i\}_{i=1}^{G} and used to update the policy \pi_{\theta}. The clipping range of the importance sampling ratio r is bounded by \varepsilon_{\text{low}} and \varepsilon_{\text{high}}.

2.2. Reward function
As discussed in Sec. 1, the visual tasks involved can be grouped into two categories: visual reasoning and visual perception. To mitigate the risk of reward hacking (Weng, 2024), rule-based reward functions are employed for each task. For visual reasoning tasks, a binary (0-1) reward function is employed based on accuracy, as defined by the following rule:

R_{\text{acc}}(\hat{a}, a) = \begin{cases} 1, & \text{if } \text{verify}(\text{parse}(\hat{a}), \text{parse}(a)) \text{ is True} \\ 0, & \text{else} \end{cases}  (3)

where \hat{a} denotes the predicted answer and a represents the ground-truth (or golden) answer. \hat{a} is parsed from the model output, which is instructed to be enclosed within \boxed{}, and verified against the golden answer a using math_verify (Kydlíček, 2025). For visual perception tasks, the format of the ground-truth varies considerably.
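Stepping back to Eqs. (1)-(2), the group-normalised advantage and the clipped token-level objective can be sketched in a few lines of pure Python (an illustrative sketch, not the training implementation; the epsilon values are typical choices and the helper names are ours):

```python
import math

def group_advantages(rewards):
    # Eq. (2): normalise group-level rewards into per-response advantages.
    g = len(rewards)
    mean = sum(rewards) / g
    std = math.sqrt(sum((r - mean) ** 2 for r in rewards) / g) or 1.0  # guard std=0
    return [(r - mean) / std for r in rewards]

def grpo_objective(logp_new, logp_old, rewards, eps_low=0.2, eps_high=0.28):
    # Eq. (1): token-level clipped objective with no KL term.
    # logp_new / logp_old are per-response lists of token log-probs under
    # the current and behaviour policies; eps_high > eps_low is the
    # "clip-high" asymmetry described above.
    adv = group_advantages(rewards)
    total = 0.0
    n_tokens = sum(len(t) for t in logp_new)
    for a, new_t, old_t in zip(adv, logp_new, logp_old):
        for lp_new, lp_old in zip(new_t, old_t):
            ratio = math.exp(lp_new - lp_old)  # importance sampling ratio r_{i,t}
            clipped = min(max(ratio, 1 - eps_low), 1 + eps_high)
            total += min(ratio * a, clipped * a)
    return total / n_tokens  # maximised during training
```

Note the single division by the total token count, which makes the loss token-level rather than averaged per response first.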
The answers for counting and verifiable OCR tasks are typically numbers, words, or open-ended phrases, which can be conveniently enclosed in \boxed{}. Therefore, a rule-based accuracy reward, as defined in Eq. 3, is utilized for these tasks. In contrast, the answers for grounding and detection tasks are commonly structured in COCO-style JSON format, which includes both a bounding box and a label. Initial experiments revealed that the model struggled to learn to enclose such JSON output within \boxed{}, but readily learned to include it within <answer></answer> tags. Consequently, for these two tasks, a format reward is established by referencing the (HuggingFace, 2025) setup:

R_{\text{format}}(o_q) = 0.25 \sum_{i=1}^{4} \mathbb{I}\big(\text{count}(o_q, s_i) = 1\big), \qquad s_1 = \text{<think>},\ s_2 = \text{</think>},\ s_3 = \text{<answer>},\ s_4 = \text{</answer>}  (4)

where o_q represents the model's response to question q, and s_i denotes a specific format tag. The indicator function \mathbb{I}(\text{condition}) evaluates to 1 if the condition is true, and 0 otherwise. For the accuracy reward, Intersection over Union (IoU) (Everingham et al., 2010) with a defined threshold and mean Average Precision (mAP) (Lin et al., 2014) are commonly adopted in prior works (Liu et al., 2025e; Shen et al., 2025; Yu et al., 2025a). The corresponding formulations for the IoU-based reward and mAP are provided in Eq. 5 and Eq. 7, respectively.

R_{\text{acc}}(\hat{a}, a) = \begin{cases} \text{IoU}(\hat{a}, a), & \text{if } \text{IoU}(\hat{a}, a) \geq \epsilon \\ 0, & \text{else} \end{cases}  (5)

where

\text{IoU}(\hat{a}, a) = \frac{\text{Area}(\hat{a} \cap a)}{\text{Area}(\hat{a} \cup a)}  (6)

Here \hat{a} represents the predicted bounding box, and a denotes the golden bounding box. The threshold \epsilon controls the strictness of the reward function, with higher values enforcing tighter matches. We ultimately adopt a dynamic \epsilon, as detailed in Sec. 3.4. The mAP is calculated using the interpolated precision curve:

\text{AP}_c = \int_{0}^{1} \underbrace{\max_{\tilde{r} \geq r} P_c(\tilde{r})}_{\text{interpolated precision } P_c^{\text{interp}}(r)} \, dr, \qquad \text{mAP} = \frac{1}{C} \sum_{c=1}^{C} \text{AP}_c.  (7)

Finally, the overall reward combines accuracy and format alignment, expressed as:

R = \alpha_{\text{acc}} \cdot R_{\text{acc}} + \alpha_{\text{format}} \cdot R_{\text{format}}  (8)

where \alpha_{\text{acc}} and \alpha_{\text{format}} are the respective weighting coefficients for each reward component.

3. V-Triune: Visual Triple Unified Reinforcement Learning
This section describes V-Triune, our visual triple unified RL system. As shown in Fig. 2, V-Triune's main goal is to jointly train VLMs on both visual reasoning and perception tasks using a single, unified training pipeline. The system is built upon three core, interconnected parts designed to handle these diverse tasks together. The following subsections explain these three core components in detail and introduce our novel Dynamic IoU reward mechanism.

3.1. Sample-Level Data Formatting
This section introduces how data is formatted to support unified training across perception and reasoning tasks. A main challenge is that different tasks may require different types of rewards, components, and weighting strategies. For example, tasks like math, puzzle, and OCR compute rewards based on the correctness of textual answers, while detection and grounding tasks rely on spatial metrics such as IoU and bounding box formatting.

Figure 2 | V-Triune System. It integrates three complementary components: Sample-Level Data Formatting (to unify diverse task inputs), Verifier-Level Reward Computation (for custom rewards via specialized verifiers), and Source-Level Metric Monitoring (to diagnose data-source-level problems). Additionally, a novel Dynamic IoU reward offers adaptive, progressive feedback for perception tasks.

In conventional RL setups, reward computation is typically defined at the task level. While this allows modular reward functions to be implemented externally, it limits flexibility when fine-grained control is required. Many multimodal tasks may contain heterogeneous samples that demand different reward strategies. For instance, OCR data may include both plain-text lines and complex tables, each requiring different evaluation rules. Similarly, detection samples can differ significantly in terms of object count, annotation completeness, or visual difficulty, which suggests the need for sample-wise adjustment of reward behavior. To support such flexibility, we define reward configurations directly at the sample level. Each sample specifies the types of rewards to compute, their relative weights, and the associated verifier to use. This allows dynamic reward routing and fine-grained weighting during training without modifying core training logic. It can also support curriculum learning or data ablation strategies simply by adjusting metadata, making the system more extensible and maintainable. As shown in Fig. 3, we implement our data schema using HuggingFace datasets (Lhoest et al., 2021), which serves as the unified interface for all sources described in Sec. 5.1.1. Besides common fields such as images and prompt, the format includes several task-agnostic keys to support reward control. The reward_model field encapsulates all reward-related metadata, including fields like accuracy_ratio and format_ratio, which define the weight of each reward type. By adjusting these values, specific rewards can be enabled, disabled, or reweighted per sample. The verifier field specifies which verifier should be used for evaluation
and provides any required parameters. More details on the verifier system are discussed in Sec. 3.2. The data_source field indicates the origin of each sample and is used in source-specific evaluation in Sec. 3.3. In summary, the sample-level formatting design enables seamless integration of diverse datasets into a unified training pipeline, while allowing highly flexible and scalable reward control.

3.2. Verifier-Level Reward Computation
Unlike approaches using fixed reward functions, we implement a standalone, asynchronous reward server to generate RL signals. Decoupling reward computation from the main training loop offers key advantages: it enhances modularity and extensibility for diverse tasks, supports flexible deployment and independent scaling (crucial for GPU-based evaluations), improves maintainability, and boosts throughput via distributed, asynchronous processing.

Data Format
{
  "data_source": Value(dtype="string"),
  "images": Sequence(feature=Image(mode=None, decode=True)),
  "prompt": [
    {
      "content": Value(dtype="string"),
      "role": Value(dtype="string")
    }
  ],
  "ability": Value(dtype="string"),
  "reward_model": {
    "answer": Value(dtype="string"),
    "ground_truth": Value(dtype="string"),
    "accuracy_ratio": Value(dtype="float32"),
    "format_ratio": Value(dtype="float32"),
    "verifier": Value(dtype="string"),
    "verifier_parm": Value(dtype="dict")
  },
  "extra_info": {
    "id": Value(dtype="string"),
    "image_path": Value(dtype="string")
  }
}

Figure 3 | Sample-level Data Scheme for Unified Training. This format, implemented using HuggingFace datasets, allows fine-grained control over reward computation by defining reward_model (including reward types and weights like accuracy_ratio/format_ratio) and verifier specifications at the individual sample level. This enables flexible and scalable handling of diverse multimodal tasks.
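The sample-level weighting of Sec. 3.1 and the verifier routing of Sec. 3.2 can be sketched together in a few lines (a minimal pure-Python sketch; the substring-match verifier and all helper names are our simplifications, not the paper's implementation):

```python
def format_reward(response):
    # Eq. (4)-style check: 0.25 for each tag that appears exactly once.
    tags = ["<think>", "</think>", "<answer>", "</answer>"]
    return 0.25 * sum(response.count(t) == 1 for t in tags)

def sample_reward(sample, response, verifiers):
    # Combine rewards per Eq. (8); the weights and verifier choice are
    # read from the sample's reward_model metadata (Fig. 3 schema).
    rm = sample["reward_model"]
    accuracy = verifiers[rm["verifier"]](response, rm["ground_truth"])
    return rm["accuracy_ratio"] * accuracy + rm["format_ratio"] * format_reward(response)

# Illustrative verifier table: a substring match stands in for math_verify;
# a real DetectionVerifier would parse boxes and apply the Dynamic IoU reward.
verifiers = {"math_verify": lambda resp, gt: float(gt in resp)}

sample = {"reward_model": {"verifier": "math_verify", "ground_truth": "42",
                           "accuracy_ratio": 1.0, "format_ratio": 0.25}}
reward = sample_reward(sample, "<think>6*7</think><answer>42</answer>", verifiers)
```

Because the verifier name and both ratios live on the sample, enabling, disabling, or reweighting a reward is a metadata change rather than a code change.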
The term verifier-level defines our reward computation granularity: instead of monolithic functions, rewards are delegated to specialized verifiers, each handling specific task groups. We use an asynchronous client-server architecture (Fig. 4) built with FastAPI (Ramírez, 2021). The client collects batches of samples from the dataloader and processes them with thread-level parallelism. Each sample is transformed into a structured payload containing task-specific key-value pairs. The client consists of multiple proxy workers that asynchronously send requests to the server with a dynamic balancing strategy. The proxy server routes each request to the appropriate verifier based on the "verifier" key in the data format. Each verifier is user-defined, containing custom functions to compute task-specific rewards based on model outputs and ground truth. In our implementation, we primarily use two types: MathVerifyVerifier, which handles reasoning, OCR, and counting tasks by evaluating answer correctness; and DetectionVerifier, which manages detection and grounding tasks, notably employing our novel Dynamic IoU reward (Sec. 3.4) alongside format-based rewards. This verifier-level approach grants significant flexibility and modularity. It greatly simplifies adding new tasks or updating reward logic independently, without altering the core training pipeline.

Figure 4 | Architecture of the Asynchronous Reward Server. The RL trainer interacts with a remote server via client-server proxies, where specialized verifiers (e.g., MathVerify, Detection) compute rewards using task-specific logic and dynamic thresholds (e.g., the dynamic IoU threshold).

3.3. Source-Level Metric Monitoring
Monitoring training metrics is essential for understanding model dynamics and real-time issue diagnosis. However, for multi-task, multi-source training, aggregated or single-task metrics are often insufficient due to a lack of traceability and per-source data variations.
Therefore, we adopt source-level metric monitoring. This involves logging metrics per data_source for each batch. This approach helps identify faulty data sources, enables targeted debugging, and reveals cross-source learning interactions. Such granular monitoring is particularly vital in RL, where training can be
unstable, making it important to verify stability and model behavior. We note that while many RL infrastructures support metric logging (Hu et al., 2024; Sheng et al., 2024), our source-level detail provides deeper insights. Empirical signs of training stability are further discussed in Sec. 4. Specifically, we log per-source reward values to trace dataset-specific stability. For perception tasks like grounding and detection, we also log detailed per-source IoU values (at thresholds 50, 75, 95, 99) and mAP scores for fine-grained convergence insights. Inspired by Ma et al. (2025b), we also track model behavior by monitoring response length and reflection ratio per source. For response length, we log various metrics including the overall average, the lengths of correct/incorrect responses, and truncation rates. Truncation rates (outputs hitting the maximum length) can indicate overly verbose or collapsed generation. For the reflection ratio, we track 15 predefined reflection-related words (e.g., "re-check", "re-think", "verify"). We then compute the reflection ratio (the proportion of responses containing these reflection words) and the correct ratio in reflection responses (the accuracy of responses that include these words). This helps diagnose model tendencies like overthinking versus superficial responses by linking reflection to correctness. All mentioned metrics are consistently logged per data source.

3.4. Dynamic IoU Reward
In object detection and visual grounding, Intersection over Union (IoU) and mean Average Precision (mAP) are standard metrics for evaluating the overlap between predicted and ground-truth boxes. While mAP summarizes performance across multiple IoU thresholds, our preliminary detection experiments show that using a thresholded IoU reward achieves comparable overall performance while providing more interpretable and controllable feedback (Fig. 5a). Therefore, we adopt an IoU-based reward strategy for its robustness and stronger alignment with localization accuracy (Eq. 5).
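The per-source logging of Sec. 3.3 can be sketched as a simple aggregation (a minimal pure-Python sketch; the field names follow the sample schema, and only three of the fifteen tracked reflection words are shown):

```python
from collections import defaultdict

REFLECTION_WORDS = ["re-check", "re-think", "verify"]  # subset of the 15 tracked words

def source_metrics(batch):
    # Group a batch by data_source and compute mean reward, reflection
    # ratio, and accuracy among reflective responses.
    stats = defaultdict(lambda: {"rewards": [], "reflect": [], "reflect_correct": []})
    for item in batch:
        s = stats[item["data_source"]]
        s["rewards"].append(item["reward"])
        reflective = any(w in item["response"] for w in REFLECTION_WORDS)
        s["reflect"].append(reflective)
        if reflective:
            s["reflect_correct"].append(item["correct"])
    out = {}
    for src, s in stats.items():
        out[src] = {
            "mean_reward": sum(s["rewards"]) / len(s["rewards"]),
            "reflection_ratio": sum(s["reflect"]) / len(s["reflect"]),
            "correct_in_reflection": (sum(s["reflect_correct"]) / len(s["reflect_correct"]))
                                     if s["reflect_correct"] else None,
        }
    return out
```

Keyed by data_source, a faulty dataset shows up as an outlier in this table rather than being averaged away in an aggregate metric.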
However, relaxed thresholds like \epsilon = 0.5 (a common option (Liu et al., 2025e; Yu et al., 2025a)) can be too lenient for RL in VLMs, especially when compared to the strict exact-match rewards used in reasoning tasks (Eq. 3). Moreover, these loose thresholds result in reward ambiguity: even if predicted bounding boxes merely shift or are oriented differently around the ground truth, they can still receive the same high reward. Such ambiguity can ultimately lead to a decline in performance during later training stages for the detection task (Fig. 5b).

Figure 5 | COCO Test Set Performance with Various Reward Designs. (a) Comparison between IoU-based and mAP-based rewards on a selected COCO multi-object subset; (b) comparison between the vanilla IoU reward and the rule-based IoU reward on a selected COCO single-object subset; (c, d) comparison between the rule-based IoU reward and the dynamic IoU reward on the COCO multi-object subset and the OVDEval negation subset.

Figure 6 | Training accuracy rewards under Dynamic
IoU versus a fixed rule-based IoU (IoU@99). To address these issues, we initially use a rule-based IoU reward scheme that enforces a stringent threshold of \epsilon = 0.99, requiring near-exact spatial alignment between predicted and ground-truth boxes (Fig. 5b). This strict criterion enhances consistency between perception and reasoning signals and, by providing an unambiguous target, aims to improve training stability. However, its very stringency introduces a cold-start problem in early rollouts, where most predictions receive 0 reward. To mitigate this cold-start problem, we adopt a dynamic IoU reward strategy, inspired by curriculum learning (Bengio et al., 2009). The IoU threshold \epsilon is adjusted in stages based on training progress: starting at 0.85 for the initial 10% of training steps, increasing to 0.95 for the subsequent 15% of steps (i.e., from 10% to 25% of total steps), and finally settling at 0.99 for the remainder of training (as illustrated in Fig. 6).

4. Training Recipe
V-Triune enables a scalable system of data, tasks, verifiers, and metrics as the foundation of training. However, early experiments revealed that joint training can lead to instability: (1) degraded evaluation performance, (2) sudden spikes in gradient norms, (3) large entropy fluctuations, and (4) abrupt increases in response length, particularly in incorrect outputs. We address training instability and scalability through targeted adjustments, including freezing the ViT to prevent gradient explosion, filtering spurious image tokens, randomizing CoT prompts, and decoupling evaluation to manage memory during large-scale training.

4.1. Disable ViT Training
In initial experiments, we performed full-parameter training by jointly optimizing the ViT and LLM.
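The staged threshold of Sec. 3.4, combined with the IoU reward of Eqs. (5)-(6), can be sketched as follows (helper names are ours; boxes are assumed to be axis-aligned (x1, y1, x2, y2) tuples):

```python
def dynamic_threshold(step, total_steps):
    # Staged schedule from the paper: 0.85 for the first 10% of steps,
    # 0.95 from 10% to 25%, then 0.99 for the remainder.
    progress = step / total_steps
    if progress < 0.10:
        return 0.85
    if progress < 0.25:
        return 0.95
    return 0.99

def iou(a, b):
    # Eq. (6) for axis-aligned boxes (x1, y1, x2, y2).
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def iou_reward(pred, gold, step, total_steps):
    # Eq. (5) with the dynamic threshold: the reward equals the IoU itself
    # once it clears the stage-dependent epsilon, and zero otherwise.
    v = iou(pred, gold)
    return v if v >= dynamic_threshold(step, total_steps) else 0.0
```

Early in training the relaxed threshold keeps the reward signal dense; by the final stage only near-exact boxes score, matching the strict exact-match criterion used for reasoning tasks.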
However, detection performance consistently collapsed after several dozen steps, regardless of hyperparameter settings. Log analysis revealed unusually large and spiking gradient norms (often >1), suggesting instability originating from the ViT.

Figure 7 | Analysis of ViT Training Instability. (a) COCO test set (OOD) performance comparison. (b) Sum of gradient norms under different training schemes. (c) Layer-wise gradient norms of ViT and LLM during full-parameter training. Notably, incorporating ViT training leads to a performance decline and highly unstable gradients; remarkably, ViT's gradients amplify during back-propagation, contrasting with the stable layer-wise gradients of the LLM.

To verify this, we conducted three training configurations: (1) LLM-only, (2) ViT-only, and (3) full-parameter training, all using identical RL settings on Orsta-7B with mixed task data. We monitored: (a) COCO test set performance, (b) total gradient norm, and (c) layer-wise gradient trends during full-parameter training. As shown in Fig. 7a, joint training leads to a performance drop, whereas LLM-only training maintains stable gains. ViT-only training yields minimal improvement, indicating that RL benefits primarily stem from updating the LLM. Fig. 7b shows that ViT training produces significantly higher gradient norms—over 10× larger than LLM-only training. Layer-wise analysis (Fig.
7c) confirms this: LLM gradients remain stable across layers, while ViT gradients amplify during backpropagation, with the first layer exhibiting 5–10× larger norms than the last. This gradient explosion destabilizes training and undermines visual performance. Consequently, we freeze
ViT parameters in subsequent experiments. The root cause of this instability remains an open research question, but we offer two key insights. First, RL not only activates VLM capabilities but also enforces modality alignment by grounding responses in visual content. When the ViT and LLM are trained jointly, the visual representation—i.e., the alignment target—shifts constantly, leading to instability analogous to the concept drift problem in machine learning (Gama et al., 2014). This dynamic target undermines stable optimization and may cause model collapse. Alternating training, similar to GANs (Goodfellow et al., 2020), where one component is frozen while the other is updated, could offer a solution. Second, ViT's contrastive pretraining may limit its suitability for RL, as it encourages static, instance-level features rather than the dynamic and causal representations needed for RL tasks. To mitigate this mismatch, auxiliary self-supervised objectives could be introduced during RL to help the ViT adapt to evolving task demands.

4.2. Mitigating Spurious Image Special Tokens

Figure 8 | An Example of Spurious Image Tokens.

To enable accurate advantage estimation, logits for both the query and the generated response are recomputed, as those returned by the inference engine may be imprecise. During the forward pass, image placeholders (highlighted in the red box in Fig. 8, appearing before the "vision_end" token) are replaced with visual features extracted by the ViT and adapter modules. However, the model may mistakenly generate special tokens (highlighted in the blue box in Fig. 8), such as image or video placeholders, that lack corresponding features—particularly under RL-zero settings. To ensure input–feature alignment and maintain training stability, a filtering step is applied to remove all such special tokens from the rollout sequence prior to recomputation.

4.3.
CoT Prompt Pool
In the early stages of training for visual mathematics tasks, variations in CoT prompts, despite conveying identical meanings, can influence model performance, affecting metrics such as accuracy and response length. To reduce this variability, we construct a CoT prompt pool comprising 10 alternatives for "Let's think step by step" and 10 for "Place the answer in \boxed{}." During training, one sentence from each group is randomly selected and appended to the instruction. This strategy mitigates prompt-induced variance and is applied specifically to samples verified with MathVerifyVerifier.

4.4. System Memory Management
V-Triune is implemented atop verl, a single-controller training framework that can approach system memory limits on the master node, particularly with large-scale vision datasets. To enable effective OOD performance monitoring, we introduce online test-set benchmarking at regular intervals. To mitigate the resulting system overhead, we decouple the testing phase from the main training loop and batch-process benchmarks, bypassing the default vLLM data handling.

5. Experiment
5.1. Implementation Details
Model: We adopt Qwen2.5-VL-7B-Instruct and Qwen2.5-VL-32B-Instruct (0321 and 0326 versions, denoting their release dates) as our base models, which are trained on 4 trillion tokens during the multimodal pre-training stage, followed by supervised fine-tuning (SFT) on 2 million instances. They are chosen for their strong performance in vision-language reasoning and perception tasks. Framework and Hardware: V-Triune is implemented on verl (Sheng et al., 2024). We enable native FSDP for training and
https://arxiv.org/abs/2505.18129v1
use vLLM for generation. All experiments are conducted on 64 NVIDIA H20 GPUs to meet the memory-intensive requirements of RL training.

Figure 9 | Data Curation Process. First, visual reasoning and visual perception data pass through a rule-based filter, which removes samples that do not meet preset criteria. Subsequently, the data enters a difficulty filter, which removes samples that are too easy or too hard based on model performance, ultimately producing the Curated Dataset.

5.1.1. Data Curation

We select four reasoning tasks (Math, Puzzle, Science, and Chart) for their varied reasoning demands, and four perception tasks (Detection, Grounding, Counting, and OCR) for their broad coverage of visual understanding. Data sources for each task are listed below:

•For the Math task, mm_math (Sun et al., 2024), geometry3k (Lu et al., 2021), and mmk12 (Meng et al., 2025) are chosen.
•For the Puzzle task, PuzzleVQA (Chia et al., 2024) and AlgoPuzzleVQA (Ghosal et al., 2024) are merged due to their shared origin, and VisualPuzzles (Song et al., 2025) is additionally included.
•For the Science task, ScienceQA (Lu et al., 2022), SciVQA (Borisova and Rehm, 2025), and the "Broader STEM Topics" and "(GradeSchool) Science" categories from ViRL39K (Wang et al., 2025b) are used.
•For the Chart task, ChartQAPro (Masry et al., 2025), ChartX (Xia et al., 2024), Table-VQA (Kim et al., 2024), and the Tables/Diagrams/Charts categories from ViRL39K (Wang et al., 2025b) are used.
•For the Detection task, V3Det (Xie et al., 2023) and Object365 (Shao et al., 2019) are chosen.
•For the Grounding task, D3 (Xie et al., 2023) is used.
•For the Counting task, CLEVR (Johnson et al., 2017; Tan et al., 2025) is used.
•For the OCR task, English OCR questions are extracted from LLaVA-OV Data (Li et al., 2024) and EST-VQA (Wang et al., 2020).

To reduce noise, we apply a two-stage data filtering process (Fig. 9): (1) rule-based filtering and (2) difficulty-based filtering. This yields 47.7K high-quality samples across 18 datasets and 8 tasks. To mitigate dataset bias, puzzle data is duplicated to ensure sufficient coverage. The final corpus includes approximately 20.6K perception and 27.1K reasoning samples, primarily consisting of single-image, single-turn conversations.

5.1.1.1 First Stage: Rule-based Filter

For the four visual reasoning tasks, the following filters are applied:

•Multiple-choice and true/false questions that are prone to hacking are discarded (Team et al., 2025b).
•Answers containing symbols such as "=", "[", "]", "(", ")", and ";" are removed, as the absence of these symbols may cause answer mismatches even if the numeric values are correct.
•Answers longer than 20 characters are discarded to avoid overly complex answers.

The filtering process for visual perception tasks involves additional complexity:

•Detection: Following Qwen2.5-VL (Bai et al., 2025), data is converted to relative coordinates. Single-box samples contain one box per category, while multi-box samples retain original annotations. Samples with over 10 boxes per category or boxes exceeding 50% of the image are removed. A 1:2 single-to-multi-box ratio is enforced, and category-level long tails are avoided.
•Grounding: Data is processed into relative coordinates, and data with a box
size greater than 50% of the image is discarded. Complex phrase labels are filtered out.
•Counting: Data is balanced per category, and only English data is retained.
•OCR: Only English OCR data is retained, and final labels must be verifiable by math_verify (Kydlíček, 2025). Since no verifiable reward model (RM) is designed, the OCR task data must pass this validation.

5.1.1.2 Second Stage: Difficulty-based Filter

To remove low-value samples, easy questions already solvable by the base model are filtered out. For reasoning tasks, we use Qwen2.5-VL-32B-0321 to compute pass@8, retaining only samples with 0 ≤ pass@8 < 100%. For perception tasks, specifically detection and grounding, pass@16 is computed using Qwen2.5-VL-7B with a 0.5 IoU threshold, and samples with cumulative IoU rewards between 2 and 10 are selected. All curated data is stored in Parquet format (Apache Software Foundation, 2025) and uniformly mixed for training without online filtering or curriculum scheduling.

5.1.2. Training Details

Two distinct RL training paradigms are explored: on-policy and off-policy. For both settings, the rollout batch size is fixed at 1024 across all experiments. The backward batch size is set to 1024 for on-policy and 128 for off-policy. All experiments use the GRPO algorithm, generating 8 candidate sequences per prompt. To promote exploration while maintaining stability, we apply the clip-high strategy from DAPO (Yu et al., 2025b) with clipping thresholds ε_high = 0.28 and ε_low = 0.2. This encourages low-probability token sampling without destabilizing training. No reference model is used; the final loss is computed solely via the token-level mean PPO-clip loss. To further stabilize learning, the ViT and connector modules remain frozen throughout. Learning rates are set to 1×10⁻⁶ (on-policy) and 5×10⁻⁷ (off-policy), with a 5% warmup before being held constant. During rollout, we use the vLLM engine with a temperature of 1.0, top-p of 1.0, and a maximum sequence length of 2048.
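As a concrete illustration of the objective described above, the following is a minimal NumPy sketch (our own simplification, not the paper's implementation) of a token-level mean PPO-clip loss with GRPO group-normalised advantages and the asymmetric clip-high thresholds ε_low = 0.2, ε_high = 0.28:

```python
import numpy as np

def grpo_clip_high_loss(logp_new, logp_old, rewards, eps_low=0.2, eps_high=0.28):
    """Token-level mean PPO-clip loss with asymmetric (clip-high) thresholds.

    logp_new, logp_old: (G, T) per-token log-probs for G sampled responses.
    rewards: (G,) scalar reward per response; the GRPO advantage is the
    group-normalised reward, broadcast to every token of that response.
    (Illustrative sketch; padding masks and other details are omitted.)
    """
    rewards = np.asarray(rewards, dtype=float)
    # GRPO: advantage = (r - group mean) / group std over the G candidates
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    adv = adv[:, None]                      # broadcast over tokens
    ratio = np.exp(logp_new - logp_old)     # importance ratio per token
    # clip-high: upper bound 1 + eps_high > 1 + eps_low permits larger
    # upweighting of low-probability tokens, encouraging exploration
    clipped = np.clip(ratio, 1.0 - eps_low, 1.0 + eps_high)
    # pessimistic PPO objective, averaged over all tokens (token-level mean)
    return -np.minimum(ratio * adv, clipped * adv).mean()
```

With identical policies the ratio is 1 everywhere and the loss vanishes; when the new policy upweights positive-advantage tokens, their contribution is capped at 1.28 rather than 1.2, which is the exploration effect the clip-high strategy targets.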
Greedy decoding is applied during testing.

5.1.3. Evaluation Benchmarks

To comprehensively assess the model's capabilities, we evaluate performance across three domains: real-world tasks, visual reasoning, and visual perception.

Query Example of Detection and Grounding:
Please detect all instances of the following category within the image: {LABEL}. Let's think step by step and output the final answer in <answer> and </answer> tags. For example: Your detailed reasoning process here. <answer> [{'bbox_2d': [x1,y1,x2,y2], 'label': label_name}] </answer>

Figure 10 | Example query format for detection and grounding tasks. The query instructs VLMs to identify instances of a given object and format the output in a specific reasoning-answer format.

For real-world task evaluation, we employ the core subset of MEGA-Bench (Chen et al., 2024), which consists of 440 diverse, long-tailed tasks encompassing over 6,000 expert-curated samples. All results are reported using the official MEGA-Bench evaluation implementation to ensure consistency. To evaluate reasoning and knowledge capabilities, we adopt MMMU (Yue et al., 2024) and MathVista (Lu et al., 2023). Both benchmarks are assessed using VLMEvalKit with default prompts (i.e., without Chain-of-Thought) and inference settings. Since GPT-4o is used for both answer extraction and scoring, which may introduce variability, we report both rule-based and GPT-4o-based scores for MMMU. For visual perception evaluation, we include COCO (val-2017) (Lin et al., 2014), OVDEval (Yao et al., 2023), CountBench (Paiss et al., 2023), OCRBench (v2) (Fu et al., 2024), and ScreenSpot-Pro (Li et al., 2025b). We report both IoU and mAP for
COCO and OVDEval. Notably, mAP and NMS-mAP are computed as the mean of sample-level metrics, rather than over the entire dataset, due to the absence of confidence scores in model outputs. These benchmarks are evaluated using simple task-specific queries, with examples provided below. OCRBench is evaluated using LMMs-Eval (Zhang et al., 2024) on the entire dataset, encompassing both English and Chinese samples. ScreenSpot-Pro is assessed with its official evaluation code, using instruction-style English prompts, positive ground truth, and all task types. All bounding boxes and keypoints are represented using coordinate values relative to the original input image dimensions.

5.2. Performance

5.2.1. MEGA-Bench

A comprehensive comparison of Orsta against its backbone and leading general-purpose/reasoning-enhanced VLMs is available in Tab. 1. Orsta shows consistent gains at both 7B and 32B scales: Orsta-7B achieves 38.31 (+3.2) on MEGA-Bench Core, and Orsta-32B reaches 45.78 (+2.1). Our method V-Triune notably boosts performance in domains with enriched training data (mathematics: +3.8 at 7B, +5.4 at 32B), as well as perception, planning, and science, indicating strong generalization in both reasoning and perception tasks. In contrast, coding and metric-related tasks show limited improvement due to sparse supervision, underscoring the targeted scalability of our unified RL training approach.

Table 1 | Performance of Orsta on MEGA-Bench core. Comparison of general-purpose and reasoning-enhanced VLMs (7B+ & 32B+). Models with improved reasoning are marked by 💡; final scores are reported as weighted averages. QwenVL-2.5-32B-0321 has known issues, which are resolved in the 0326 version. All results are obtained using the official MEGA-Bench evaluation code, except for Gemma3-27B (†).

Model | Knowledge | Mathematics | Perception | Coding | Info. Ex. | Planning | Science | Metrics | MEGA-Bench Core

7B+ Models
QwenVL-2-7B | 39.96 | 25.95 | 39.99 | 31.49 | 40.29 | 16.64 | 28.59 | 43.61 | 34.47
QwenVL-2.5-7B | 38.84 | 27.67 | 41.24 | 28.93 | 50.23 | 16.32 | 36.75 | 41.64 | 35.07
InternVL-3-8B | 36.64 | 32.75 | 42.17 | 35.11 | 48.92 | 14.35 | 36.51 | 53.94 | 36.48
Gemma3-12B | 41.11 | 29.10 | 37.38 | 30.27 | 46.56 | 16.10 | 36.83 | 50.40 | 35.04
Kimi-VL-A3B | 37.63 | 27.07 | 39.50 | 22.30 | 40.99 | 22.17 | 33.94 | 46.65 | 34.40
MM-Eureka-7B 💡 | 40.12 | 31.59 | 39.71 | 28.75 | 49.32 | 16.64 | 37.25 | 46.39 | 35.96
VL-Rethinker-7B 💡 | 40.65 | 30.08 | 42.02 | 29.87 | 52.03 | 17.83 | 36.82 | 46.90 | 37.25
Kimi-VL-A3B-Thinking 💡 | 33.45 | 17.76 | 28.11 | 14.69 | 41.14 | 12.64 | 28.60 | 43.97 | 27.08
Orsta-7B (Ours) 💡 | 41.65 | 31.48 | 43.84 | 32.82 | 54.07 | 17.83 | 36.91 | 41.66 | 38.31
Δ (Ours - Backbone) | +2.8 | +3.8 | +2.6 | +3.9 | +3.8 | +1.5 | +0.2 | +0.0 | +3.2

32B+ Models
QwenVL-2.5-32B-0321 | 8.48 | 12.62 | 11.99 | 13.59 | 15.44 | 8.61 | 16.78 | 14.91 | 11.87
MM-Eureka-32B 💡 | 12.20 | 20.19 | 21.88 | 15.86 | 21.23 | 15.47 | 19.95 | 22.77 | 18.57
VL-Rethinker-32B 💡 | 12.16 | 28.09 | 22.99 | 11.89 | 21.50 | 15.09 | 28.10 | 15.73 | 19.41
Orsta-32B-0321 (Ours) 💡 | 21.33 | 28.55 | 32.23 | 19.44 | 26.38 | 17.78 | 33.20 | 24.18 | 25.94
Δ (Ours - Backbone) | +12.9 | +15.9 | +20.2 | +5.9 | +10.9 | +9.2 | +16.4 | +9.3 | +14.1
Gemma3-27B† | 49.43 | 42.20 | 45.46 | 40.18 | 49.30 | 24.96 | 47.08 | 58.99 | 41.82†
QwenVL-2.5-32B-0326 | 46.09 | 32.04 | 47.55 | 38.36 | 61.65 | 28.43 | 37.55 | 50.38 | 43.67
InternVL-3-38B | 46.32 | 40.29 | 55.05 | 45.29 | 56.63 | 22.88 | 52.04 | 58.04 | 46.69
Skywork-R1V-38B 💡 | 25.59 | 28.45 | 22.95 | 19.88 | 19.53 | 9.74 | 22.64 | 37.55 | 21.54
Skywork-R1V2-38B 💡 | 17.08 | 12.38 | 15.65 | 7.14 | 9.90 | 17.60 | 14.29 | 0.0 | 15.39
Orsta-32B-0326 (Ours) 💡 | 46.78 | 37.43 | 50.86 | 38.92 | 63.14 | 28.05 | 42.68 | 53.01 | 45.78
Δ (Ours - Backbone) | +0.7 | +5.4 | +3.3 | +0.6 | +1.5 | -0.4 | +5.1 | +2.6 | +2.1

Fig. 11 shows the MEGA-Bench performance trajectories of three Orsta variants (7B, 32B-0321, 32B-0326) under on-policy and off-policy RL. All
variants exhibit stable improvement, with on-policy training generally outperforming off-policy. The 7B model shows smoother and more pronounced gains, while the 32B models experience slower or more variable progress, indicating increased optimization challenges at larger scales.

Figure 11 | Training Trends of On-Policy vs Off-Policy Across Three Model Variants on MEGA-Bench core (7B, 32B-0321, 32B-0326). Models are evaluated every 5 steps from step 0 to 135. Starting points and peak performances are annotated on the curves.

Qwen2.5-VL-0321, a publicly released checkpoint, exhibits known issues in perception and output formatting but performs reliably in reasoning tasks, as confirmed by our evaluations and VL-Rethinker (Wang et al., 2025a). These issues are addressed in the subsequent 0326 release. We regard the 0321 version as a clean baseline with core knowledge capabilities. As shown in Fig. 12, Orsta-32B-0321 demonstrates that reinforcement learning primarily enhances existing model strengths, serving as an alignment mechanism rather than introducing new capabilities. Performance gains are most notable in in-domain tasks (mathematics, perception, science, and planning), while out-of-domain tasks like coding see limited improvement, underscoring RL's alignment-focused impact. In summary, our results demonstrate that reinforcement learning effectively enhances both visual reasoning and perception within a unified framework.
RL consistently improves performance across MEGA-Bench core's 440 diverse tasks, supporting its role as a general-purpose alignment strategy that unlocks latent capabilities in pretrained vision-language models.

5.2.2. Common Downstream Tasks

Figure 12 | Training Trends of Orsta-32B-0321 on MEGA-Bench core. The dark line denotes the overall MEGA-Bench Core, linking to the performance shown in Fig. 11.

As shown in Tab. 2, on the general knowledge benchmark MMMU, Orsta outperforms its backbone by 4% at 7B and 1% at 32B-0326. On the math-focused MathVista benchmark, it achieves over 5% gains across all model sizes. These results align with improvements seen on math tasks in MEGA-Bench, reinforcing Orsta's strength in enhancing reasoning capabilities.

Orsta consistently improves visual perception across benchmarks. On COCO detection, Orsta-7B achieves notable gains (+7.81 mAP and +12.17 mAP@50 for single-object; +3.77 mAP and +5.48 mAP@50 for multi-object), with stronger gains in simpler scenarios. Orsta-32B-0321 shows marked improvements, addressing prior perception issues, while Orsta-32B-0326 yields +3% mAP gains across both subsets. On OVDEval, Orsta-7B and 32B improve by +5.3 and +3.5 mAP, validating the dynamic IoU reward. GUI and OCR tasks (ScreenSpot-Pro, OCRBench) show consistent 1–2% gains. CountBench sees the most significant boost, with Orsta-7B outperforming 32B SFT models and Orsta-32B setting a new state of the art. Overall, V-Triune delivers greater perception improvements for less-aligned base models (0321) than for already-instructed ones (0326).

Table 2 | Benchmarking Orsta Against Baselines on Common Visual Reasoning and Perception Tasks.
For COCO, we report both mAP and mAP@50 as evaluation metrics (mAP|mAP@50), while OVDEval is evaluated using NMS-AP
(task-wide mAP). Accuracy serves as the evaluation metric for all remaining tasks.

Tasks                 QwenVL-2.5-7B  Orsta-7B     QwenVL-2.5-32B-0321  Orsta-32B-0321  QwenVL-2.5-32B-0326  Orsta-32B-0326

Visual Reasoning
MMMU (val, rule)      45.56          49.70        37.11                34.67           39.22                38.00
MMMU (val, model)     54.40          57.10        60.80                64.11           64.20                64.78
MathVista (testmini)  67.50          72.50        70.80                76.30           73.40                76.40

Visual Perception
COCO (single)         35.02|62.26    42.83|74.43  12.19|21.43          31.64|53.44     40.26|68.07          42.12|71.55
COCO (multiple)       59.59|74.74    63.36|80.22  1.61|2.00            50.32|63.33     62.80|78.75          64.28|81.19
OVDEval               52.23          57.52        3.84                 35.65           56.80                60.36
ScreenSpot-Pro        22.71          23.91        49.46                50.98           51.23                52.69
CountBench            71.69          82.28        83.71                88.59           83.91                88.19
OCRBench              55.11          56.05        44.85                43.78           58.14                59.09

5.3. Training Metric Analysis

V-Triune's source-level metric monitoring enables detailed analysis of cognitive behaviors learned during training, such as reflection patterns. To explore whether perception tasks can benefit from test-time scaling via extended CoT, we analyze the Orsta-32B-0321 off-policy logs, which show a marked increase in response length. The analysis focuses on four tasks, Math (MMK12), Puzzle (PuzzleVQA), Detection (V3Det), and OCR (estvqa), each represented by a single dataset. We report three key metrics: response length, reflection ratio, and accuracy of reflection responses, as defined in Sec. 3.3.

Fig. 13 shows the evolution of key metrics across training steps for the selected tasks. Reasoning tasks (Math and Puzzle) consistently exhibit higher response lengths and reflection ratios than perception tasks (Detection and OCR). For reasoning tasks, response length generally increases over time, with Puzzle showing a steady rise and Math displaying more fluctuation. Among perception tasks, OCR also shows a clear upward trend in response length, while Detection initially decreases before stabilizing around a mean of 325 tokens, without a clear upward trajectory. The middle row of Fig.
13 shows a clear increase in reflection ratios for Math and Puzzle, indicating a growing use of reflective steps during training. OCR also exhibits an upward trend, though with greater variability. In contrast, Detection maintains a consistently low reflection ratio, near zero, suggesting little engagement in reflective reasoning. In the bottom row, the correctness of reflection responses improves steadily for Puzzle and gradually for OCR, rising from 0.4 to 0.7–0.8. Math shows fluctuating correctness between 0.3 and 0.7, without a clear trend but indicating partial effectiveness. Detection, however, remains at zero throughout, reflecting the absence or failure of reflective responses. Overall, reasoning tasks and OCR demonstrate increasing reflection usage and, to varying degrees, improved reflection quality. Detection diverges from this pattern, showing minimal reflective behavior and no apparent benefit from longer, reasoning-style responses.

Figure 13 | Training dynamics of response length (top row), reflection ratio (middle row), and correct ratio in reflection
responses (bottom row) during training steps for Math (MMK12), Puzzle (PuzzleVQA), Detection (V3Det), and OCR (estvqa) tasks using the Orsta-32B-0321 off-policy setting. Each column corresponds to a different task, and each row represents a distinct metric.

5.4. Ablation Study

Training-strategy ablation: As shown in Fig. 14a, updating only the LLM stack raises the score from approximately 35 to 38 by step 120, indicating that reasoning weights are the primary driver of performance improvement. In contrast, tuning only the vision backbone results in negligible gains, with scores plateauing around 35.5. When both branches are optimized jointly, performance initially mirrors the LLM-only trajectory but slightly surpasses it in later stages (peaking at approximately 38.5).

Task-composition ablation: Training on both reasoning and perception data yields the strongest performance, reaching approximately 37.5. A reasoning-only curriculum closely trails (within 0.3 points) and briefly surpasses it mid-training, suggesting that the benchmark prioritizes logical competence over pure perception. Perception-only training consistently lags by 0.7–1.0 points but still shows steady improvement, indicating that visual supervision provides transferable alignment signals. The consistent performance hierarchy (Reasoning+Perception > Reasoning > Perception) underscores the value of mixed-task corpora: combining complementary signals leads to additive gains rather than diluted optimization.

Learning-rate ablation: For the Orsta-32B model, conservative step sizes are essential. A learning rate of 1e-6 yields the highest and most stable plateau (45.5), while 1.5e-6 performs similarly until a mild degradation after 80 steps. Increasing the rate to 2e-6 causes a late-stage collapse to 38, and 3e-6 diverges catastrophically after 50 steps, dropping below 36.
This pattern suggests that larger models sit closer to the edge of the loss landscape, benefiting from small, stable updates, with 1e-6 offering the best trade-off between convergence speed and final performance on MEGA-Bench.

Figure 14 | Ablation Study on Training Strategies (a), Task Composition (b), and Learning Rates (c). (a) Various training strategies evaluated on the 7B model (LLM-only, ViT-only, joint training); (b) different task compositions evaluated on the 7B model (reasoning, perception, or both); (c) effects of learning rates (1e-6, 1.5e-6, 2e-6, 3e-6) on the performance of the 32B model. Each subfigure shows the score progression over training steps.

Table 3 | Task-wise Comparison of Public VLM-RL Models with Varying Model Configurations. The evaluated tasks include Math, Science, Chart, Puzzle, OCR, Detection, Grounding, Counting, Classification, Spatial question answering, and Visual question answering. Backbones marked with a star (∗) indicate models trained from a pre-trained backbone (i.e., the zero-RL setting), while others are trained from instructed models.

Model | Size | Backbone | Math | Science | Chart | Puzzle | OCR | DET | GND | CNT | Others
Visual-RFT (Liu et al., 2025e) | 2B | Qwen2-VL | - | - | - | - | - | ✓ | ✓ | - | CLS
DeepPerception (Ma et al., 2025a) | 2B | Qwen2-VL Base∗ | - | - | - | - | - | - | - | - | S.QA
Vision-R1 (Huang et al., 2025) | 7B,72B | Qwen2.5-VL | ✓ | ✓ | - | - | - | - | - | - | -
R1-Onevision (Yang et al., 2025) | 7B | Qwen2.5-VL | ✓ | ✓ | ✓ | - | - | - | - | - | V.QA
Reason-RFT (Tan et al., 2025) | 2B,7B | Qwen2-VL | ✓ | - | - | - | - | - | - | ✓ | S.QA
OThink-MR1 (Liu et al., 2025d) | 2B,7B | Qwen2.5-VL | ✓ | - | - | - | - | - | - | ✓ | -
Perception-R1 (Yu et al., 2025a) | 2B,3B | Qwen2 & 2.5-VL | - | - | - | - | ✓ | ✓ | ✓ | - | -
VLM-R1 (Shen et al., 2025) | 3B | Qwen2.5-VL | - | - | - | - | - | ✓ | ✓ | - | -
MM-EUREKA (Meng et al., 2025) | 7B,32B | Qwen2.5-VL | ✓ | - | - | - | - | - | - | - | S.QA
VL-Rethinker (Wang et al., 2025a) | 7B,32B,72B | Qwen2.5-VL | ✓ | ✓ | ✓ | - | - | - | - | - | S.QA
Orsta (ours) | 7B,32B | Qwen2.5-VL | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | -

6. Related Work

Following DeepSeek R1 (Guo et al., 2025), recent advancements in visual reinforcement learning (summarized in Table 3) have explored diverse strategies to enhance multimodal reasoning and perception. Models such as Vision-R1 (Huang et al., 2025), LMM-R1 (Peng et al., 2025), R1-OneVision (Yang et al., 2025), VisualThinker-R1-Zero (Zhou et al., 2025), and MM-Eureka (Meng et al., 2025) primarily focus on strengthening reasoning capabilities through instruction tuning or reinforcement learning, leveraging CoT datasets, image-to-text conversion, or rule-based signals to elicit problem-solving behavior. In parallel, models like Visual-RFT (Liu et al., 2025e), R1-V (Chen et al., 2025), Reason-RFT (Tan et al., 2025), and DeepPerception (Ma et al., 2025a) apply task-specific, verifiable reward signals (e.g., IoU, mAP, math_verify) to improve perception tasks such as detection, grounding, and counting.
Seg-Zero (Liu et al., 2025b), Perception-R1 (Yu et al., 2025a), and VLM-R1 (Shen et al., 2025) further propose tailored reward functions for segmentation and OCR, though they largely remain within task-specific generalization boundaries. To address broader cross-task generalization, OThink-MR1 (Liu et al., 2025d) incorporates GRPO with Dynamic KL, while VL-Rethinker (Wang et al., 2025b) introduces selective sample replay and forced rethinking to enhance reasoning capabilities. Additionally, several general-purpose VLMs have demonstrated the promise of visual reinforcement learning (Team, 2025; Team et al., 2025a). In contrast to prior efforts that treat reasoning and perception in isolation, our proposed V-Triune unifies both domains under a single reinforcement learning system.

7. Discussion & Future Work

In this paper, we propose V-Triune, the first visual triple-unified reinforcement learning system for vision-language models (VLMs) that effectively scales across both reasoning and perception tasks. Unlike previous approaches that are often limited to single-domain or lightweight tasks, V-Triune achieves substantial performance improvements across a broad spectrum of real-world vision-language challenges through its three-tier component design and dynamic IoU-based reward.

Notably, V-Triune yields Orsta-7B and Orsta-32B, with improvements of 3.2% and 2.1% over strong backbones, respectively. Our experiments indicate that reinforcement learning in VLMs primarily serves as an alignment strategy, refining the model's decision-making and response behavior rather than enabling new knowledge acquisition. This supports the notion that RL fine-tuning enhances the utility and robustness of pre-trained VLMs without altering their foundational capabilities.
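To make the dynamic IoU-based reward mentioned above concrete, the following is a rough, self-contained sketch; the greedy box matching and the linear threshold schedule here are our own illustrative assumptions, not the paper's exact rule. The idea is that a predicted box only earns reward when its IoU with a ground-truth box clears a bar that tightens as training progresses:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def dynamic_iou_reward(pred_boxes, gt_boxes, step, total_steps,
                       thr_start=0.5, thr_end=0.95):
    """Mean IoU over greedily matched boxes, counting a match only when it
    clears a threshold that tightens over training (illustrative linear
    schedule; a hypothetical stand-in for the paper's actual schedule)."""
    thr = thr_start + (thr_end - thr_start) * (step / max(1, total_steps))
    unmatched = list(gt_boxes)
    scores = []
    for p in pred_boxes:
        best = max(unmatched, key=lambda g: iou(p, g), default=None)
        v = iou(p, best) if best is not None else 0.0
        if v >= thr:
            unmatched.remove(best)   # one-to-one matching
            scores.append(v)
        else:
            scores.append(0.0)       # miss or below-threshold match
    return sum(scores) / max(1, len(scores)) if pred_boxes else 0.0
```

A loose early threshold gives the policy a dense signal while its boxes are still coarse, and the tightening bar keeps the same reward from saturating once localization improves, which is consistent with the behavior the OVDEval results attribute to the dynamic IoU reward.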
We also acknowledge several limitations that merit further investigation: •Limited performance scaling in perception tasks: For visual reasoning tasks, we observe clear trends of increasing response length and reflection ratio as training progresses, akin
to test-time scaling laws observed in LLMs. However, such trends are not evident in visual perception tasks. The underlying factors driving performance improvements in perception remain unclear. Exploring multi-step RL for perception tasks, similar to approaches in OpenAI's o3, may offer new insights and avenues for progress.

•Underexplored potential of RL-zero in VLMs: There is currently limited research on RL-zero for vision-language models. Nonetheless, we see early indications that RL-zero can surpass the current limitations imposed by supervised fine-tuning (SFT). Given that multimodal alignment remains a fundamental challenge in VLMs, and that SFT may be inherently limited in addressing this issue, we believe that RL-zero has the potential to redefine the optimization paradigm and unlock new capabilities for vision-language models.

The V-Triune system and Orsta models will be publicly available at https://github.com/MiniMax-AI. We hope this work inspires further research into reinforcement learning as a general-purpose training paradigm for vision-language understanding, and encourages exploration of richer reward schemes, advanced reasoning strategies, and task-specific adaptations.

Contributions

Core Contributions: Yan Ma∗1,4, Linge Du∗1,3, Xuyang Shen∗†1,B, Junjie Yan1
Contributions: Shaoxiang Chen1, Pengfei Li1, Qibing Ren1,2
Advisors: Junjie Yan1, Pengfei Liu2,4, Yuchao Dai3, Lizhuang Ma2
Affiliations: 1MiniMax; 2Shanghai Jiao Tong University; 3Northwestern Polytechnical University; 4Generative Artificial Intelligence Lab (GAIR)
∗Equal Contribution; †Project Lead; B Corresponding Author

References

Apache Software Foundation. Apache Parquet documentation. https://parquet.apache.org/docs/, 2025. Accessed: 2025-05-20.
Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, Humen Zhong, Yuanzhi Zhu, Mingkun Yang, Zhaohai Li, Jianqiang Wan, Pengfei Wang, Wei Ding, Zheren Fu, Yiheng Xu, Jiabo Ye, Xi Zhang, Tianbao Xie, Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, and Junyang Lin. Qwen2.5-vl technical report. arXiv preprint arXiv:2502.13923 , 2025. Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning , pages 41–48, 2009. Ekaterina Borisova and Georg Rehm. Scivqa: Scientific visual question answering. SDProc 2025 , 2025. URL https://sdproc.org/2025/scivqa.html . Jiacheng Chen, Tianhao Liang, Sherman Siu, Zhengqing Wang, Kai Wang, Yubo Wang, Yuansheng Ni, Wang Zhu, Ziyan Jiang, Bohan Lyu, et al. Mega-bench: Scaling multimodal evaluation to over 500 real-world tasks. arXiv preprint arXiv:2410.10563 , 2024. Liang Chen, Lei Li, Haozhe Zhao, Yifan Song, and Vinci. R1-v: Reinforcing super generalization ability in vision-language models with less than $3. https://github.com/Deep-Agent/R1-V , 2025. Accessed: 2025-02-02. Yew Ken Chia, Vernon Toh Yan Han, Deepanway Ghosal, Lidong Bing, and Soujanya Poria. Puzzlevqa: Diagnosing multimodal reasoning challenges of language models with abstract visual patterns. arXiv preprint arXiv:2403.13315 , 2024. Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The pascal visual object classes (voc) challenge. International journal of computer vision , 88:303–338, 2010. Ling Fu, Biao Yang, Zhebin Kuang, Jiajun Song, Yuzhe Li, Linghao Zhu, Qidi Luo, Xinyu Wang, Hao Lu, Mingxin Huang, et al. Ocrbench v2: An improved benchmark for evaluating large multimodal models on visual text localization and reasoning. arXiv preprint arXiv:2501.00321 , 2024.
João Gama, Indrė Žliobaitė, Albert Bifet, Mykola Pechenizkiy, and Abdelhamid Bouchachia. A survey on concept drift adaptation. ACM Computing Surveys (CSUR), 46(4):1–37, 2014.

Deepanway Ghosal, Vernon Toh Yan Han, Yew Ken Chia, and Soujanya Poria. Are language models puzzle prodigies? Algorithmic puzzles unveil serious challenges in multimodal reasoning. arXiv preprint arXiv:2403.03864, 2024.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. Communications of the ACM, 63(11):139–144, 2020.

Google DeepMind. Gemini model thinking updates: March 2025, 2025. URL https://blog.google/technology/google-deepmind/gemini-model-thinking-updates-march-2025/. Accessed: May 11, 2025.

Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.

Jian Hu, Xibin Wu, Zilin Zhu, Weixun Wang, Dehao Zhang, Yu Cao, et al. Openrlhf: An easy-to-use, scalable and high-performance rlhf framework. arXiv preprint arXiv:2405.11143, 2024.

Jingcheng Hu, Yinmin Zhang, Qi Han, Daxin Jiang, Xiangyu Zhang, and Heung-Yeung Shum. Open-reasoner-zero: An open source approach to scaling up reinforcement learning on the base model. arXiv preprint arXiv:2503.24290, 2025.

Wenxuan Huang, Bohan Jia, Zijie Zhai, Shaosheng Cao, Zheyu Ye, Fei Zhao, Zhe Xu, Yao Hu, and Shaohui Lin. Vision-r1: Incentivizing reasoning capability in multimodal large language models. arXiv preprint arXiv:2503.06749, 2025.

HuggingFace. Open r1: A fully open reproduction of deepseek-r1, January 2025. URL https://github.com/huggingface/open-r1.
Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, Alex Iftimie, Alex Karpenko, Alex Tachard Passos, Alexander Neitz, Alexander Prokofiev, Alexander Wei, Allison Tam, Ally Bennett, Ananya Kumar, Andre Saraiva, Andrea Vallone, Andrew Duberstein, Andrew Kondrich, Andrey Mishchenko, Andy Applebaum, Angela Jiang, Ashvin Nair, Barret Zoph, Behrooz Ghorbani, Ben Rossen, Benjamin Sokolowsky, Boaz Barak, Bob McGrew, Borys Minaiev, Botao Hao, Bowen Baker, Brandon Houghton, Brandon McKinzie, Brydon Eastman, Camillo Lugaresi, Cary Bassin, Cary Hudson, Chak Ming Li, Charles de Bourcy, Chelsea Voss, Chen Shen, Chong Zhang, Chris Koch, Chris Orsinger, Christopher Hesse, Claudia Fischer, Clive Chan, Dan Roberts, Daniel Kappler, Daniel Levy, Daniel Selsam, David Dohan, David Farhi, David Mely, David Robinson, Dimitris Tsipras, Doug Li, Dragos Oprica, Eben Freeman, Eddie Zhang, Edmund Wong, Elizabeth Proehl, Enoch Cheung, Eric Mitchell, Eric Wallace, Erik Ritter, Evan Mays, Fan Wang, Felipe Petroski Such, Filippo Raso, Florencia Leoni, Foivos Tsimpourlas, Francis Song, Fred von Lohmann, Freddie Sulit, Geoff Salmon, Giambattista Parascandolo, Gildas Chabot, Grace Zhao, Greg Brockman, Guillaume Leclerc, Hadi Salman, Haiming Bao, Hao Sheng, Hart Andrin, Hessam Bagherinezhad, Hongyu Ren, Hunter Lightman, Hyung Won Chung, Ian Kivlichan, Ian O’Connell, Ian Osband, Ignasi Clavera Gilaberte, and Ilge Akkaya. Openai o1 system card. CoRR, abs/2412.16720, 2024. doi: 10.48550/ARXIV.2412.16720. URLhttps://doi.org/10.48550/arXiv.2412.16720 . Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. InProceedings of the IEEE conference on computer vision and pattern recognition , pages 2901–2910, 2017.
Yoonsik Kim, Moonbin Yim, and Ka Yeon Song. Tablevqa-bench: A visual question answering benchmark on multiple table domains. arXiv preprint arXiv:2404.19205, 2024. Hynek Kydlíček. Math-verify: A library for rule-based verification of mathematical answers, 2025. URL https://github.com/huggingface/Math-Verify. Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Šaško, Gunjan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Clément Delangue, Théo Matussière, Lysandre Debut, Stas Bekman, Pierric Cistac, Thibault Goehringer, Victor Mustar, François Lagunas, Alexander Rush, and Thomas Wolf. Datasets: A community library for natural language processing. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 175–184, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. URL https://aclanthology.org/2021.emnlp-demo.21. Aonian Li, Bangwei Gong, Bo Yang, Boji Shan, Chang Liu, Cheng Zhu, Chunhao Zhang, Congchao Guo, Da Chen, Dong Li, et al. Minimax-01: Scaling foundation models with lightning attention. arXiv preprint arXiv:2501.08313, 2025a. Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Peiyuan Zhang, Yanwei Li, Ziwei Liu, et al. Llava-onevision: Easy visual task transfer. arXiv preprint arXiv:2408.03326, 2024. Kaixin Li, Ziyang Meng, Hongzhan Lin, Ziyang Luo, Yuchen Tian, Jing Ma, Zhiyong Huang, and Tat-Seng Chua. Screenspot-pro: Gui grounding for professional high-resolution computer use. arXiv preprint arXiv:2504.07981, 2025b.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V, pages 740–755. Springer, 2014. Qianchu Liu, Sheng Zhang, Guanghui Qin, Timothy Ossowski, Yu Gu, Ying Jin, Sid Kiblawi, Sam Preston, Mu Wei, Paul Vozila, et al. X-reasoner: Towards generalizable reasoning across modalities and domains. arXiv preprint arXiv:2505.03981, 2025a. Yuqi Liu, Bohao Peng, Zhisheng Zhong, Zihao Yue, Fanbin Lu, Bei Yu, and Jiaya Jia. Seg-zero: Reasoning-chain guided segmentation via cognitive reinforcement. arXiv preprint arXiv:2503.06520, 2025b. Yuqi Liu, Tianyuan Qu, Zhisheng Zhong, Bohao Peng, Shu Liu, Bei Yu, and Jiaya Jia. Visionreasoner: Unified visual perception and reasoning via reinforcement learning. arXiv preprint arXiv:2505.12081, 2025c. Zhiyuan Liu, Yuting Zhang, Feng Liu, Changwang Zhang, Ying Sun, and Jun Wang. Othink-mr1: Stimulating multimodal generalized reasoning capabilities via dynamic reinforcement learning. arXiv preprint arXiv:2503.16081, 2025d. Ziyu Liu, Zeyi Sun, Yuhang Zang, Xiaoyi Dong, Yuhang Cao, Haodong Duan, Dahua Lin, and Jiaqi Wang. Visual-rft: Visual reinforcement fine-tuning. arXiv preprint arXiv:2503.01785, 2025e. Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan Huang, Xiaodan Liang, and Song-Chun Zhu. Inter-gps: Interpretable geometry problem solving with formal language and symbolic reasoning. arXiv preprint arXiv:2105.04165, 2021. Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. In The 36th Conference on Neural Information Processing Systems (NeurIPS), 2022.
Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating math reasoning in visual contexts with gpt-4v, bard, and other large multimodal models. CoRR, 2023. Xinyu Ma, Ziyang Ding, Zhicong Luo, Chi Chen, Zonghao Guo, Derek F Wong, Xiaoyi Feng, and Maosong Sun. Deepperception: Advancing r1-like cognitive visual perception in mllms for knowledge-intensive visual grounding. arXiv preprint arXiv:2503.12797, 2025a. Yan Ma, Steffi Chern, Xuyang Shen, Yiran Zhong, and Pengfei Liu. Rethinking rl scaling for vision language models: A transparent, from-scratch framework and comprehensive evaluation scheme. arXiv preprint arXiv:2504.02587, 2025b. Ahmed Masry, Mohammed Saidul Islam, Mahir Ahmed, Aayush Bajaj, Firoz Kabir, Aaryaman Kartha, Md Tahmid Rahman Laskar, Mizanur Rahman, Shadikur Rahman, Mehrad Shahmohammadi, et al. Chartqapro: A more diverse and challenging benchmark for chart question answering. arXiv preprint arXiv:2504.05506, 2025. Fanqing Meng, Lingxiao Du, Zongkai Liu, Zhixiang Zhou, Quanfeng Lu, Daocheng Fu, Tiancheng Han, Botian Shi, Wenhai Wang, Junjun He, et al. Mm-eureka: Exploring the frontiers of multimodal reasoning with rule-based reinforcement learning. arXiv preprint arXiv:2503.07365, 2025. OpenAI. Introducing openai o3 and o4-mini, April 2025. URL https://openai.com/index/introducing-o3-and-o4-mini/. Accessed: 2025-04-18. OpenAI. Thinking with images, 2025. URL https://openai.com/index/thinking-with-images/. Accessed: 2025-05-23. Roni Paiss, Ariel Ephrat, Omer Tov, Shiran Zada, Inbar Mosseri, Michal Irani, and Tali Dekel. Teaching clip to count to ten. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3170–3180, 2023. Yingzhe Peng, Gongrui Zhang, Miaosen Zhang, Zhiyuan You, Jie Liu, Qipeng Zhu, Kai Yang, Xingzhong Xu, Xin Geng, and Xu Yang. Lmm-r1: Empowering 3b lmms with strong reasoning abilities through two-stage rule-based rl. arXiv preprint arXiv:2503.07536, 2025. Sebastián Ramírez. Fastapi. https://fastapi.tiangolo.com, 2021.
URL https://github.com/fastapi/fastapi. FastAPI framework, high performance, easy to learn, fast to code, ready for production. John Schulman. Approximating kl divergence, 2020. URL http://joschu.net/blog/kl-approx.html. Accessed: 2025-05-23. Shuai Shao, Zeming Li, Tianyuan Zhang, Chao Peng, Gang Yu, Xiangyu Zhang, Jing Li, and Jian Sun. Objects365: A large-scale, high-quality dataset for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8430–8439, 2019. Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024. Haozhan Shen, Peng Liu, Jingcheng Li, Chunxin Fang, Yibo Ma, Jiajia Liao, Qiaoli Shen, Zilun Zhang, Kangjia Zhao, Qianqian Zhang, et al. Vlm-r1: A stable and generalizable r1-style large vision-language model. arXiv preprint arXiv:2504.07615, 2025. Guangming Sheng, Chi Zhang, Zilingfeng Ye, Xibin Wu, Wang Zhang, Ru Zhang, Yanghua Peng, Haibin Lin, and Chuan Wu. Hybridflow: A flexible and efficient rlhf framework. arXiv preprint arXiv:2409.19256, 2024. Yueqi Song, Tianyue Ou, Yibo Kong, Zecheng Li, Graham Neubig, and Xiang Yue. Visualpuzzles: Decoupling multimodal reasoning evaluation from domain knowledge. arXiv preprint arXiv:2504.10342, 2025. Kai Sun, Yushi Bai, Ji Qi, Lei Hou, and Juanzi Li. Mm-math: Advancing multimodal math evaluation with process evaluation and fine-grained classification. arXiv preprint arXiv:2404.05091, 2024. Huajie Tan, Yuheng Ji, Xiaoshuai Hao, Minglan Lin, Pengwei Wang, Zhongyuan Wang, and Shanghang Zhang. Reason-rft: Reinforcement fine-tuning for visual reasoning. arXiv preprint arXiv:2503.20752, 2025. ByteDance Seed Team. Seed1.5-vl technical report. arXiv preprint arXiv:2505.07062, 2025.
Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, Cheng Chen, Cheng Li, Chenjun Xiao, Chenzhuang Du, Chonghua Liao, et al. Kimi k1.5: Scaling reinforcement learning with llms. arXiv preprint arXiv:2501.12599, 2025a. Kimi Team, Angang Du, Bohong Yin, Bowei Xing, Bowen Qu, Bowen Wang, Cheng Chen, Chenlin Zhang, Chenzhuang Du, Chu Wei, Congcong Wang, Dehao Zhang, Dikang Du, Dongliang Wang, Enming Yuan, Enzhe Lu, Fang Li, Flood Sung, Guangda Wei, Guokun Lai, Han Zhu, Hao Ding, Hao Hu, Hao Yang, Hao Zhang, Haoning Wu, Haotian Yao, Haoyu Lu, Heng Wang, Hongcheng Gao, Huabin Zheng, Jiaming Li, Jianlin Su, Jianzhou Wang, Jiaqi Deng, Jiezhong Qiu, Jin Xie, Jinhong Wang, Jingyuan Liu, Junjie Yan, Kun Ouyang, Liang Chen, Lin Sui, Longhui Yu, Mengfan Dong, Mengnan Dong, Nuo Xu, Pengyu Cheng, Qizheng Gu, Runjie Zhou, Shaowei Liu, Sihan Cao, Tao Yu, Tianhui Song, Tongtong Bai, Wei Song, Weiran He, Weixiao Huang, Weixin Xu, Xiaokun Yuan, Xingcheng Yao, Xingzhe Wu, Xinxing Zu, Xinyu Zhou, Xinyuan Wang, Y. Charles, Yan Zhong, Yang Li, Yangyang Hu, Yanru Chen, Yejie Wang, Yibo Liu, Yibo Miao, Yidao Qin, Yimin Chen, Yiping Bao, Yiqin Wang, Yongsheng Kang, Yuanxin Liu, Yulun Du, Yuxin Wu, Yuzhi Wang, Yuzi Yan, Zaida Zhou, Zhaowei Li, Zhejun Jiang, Zheng Zhang, Zhilin Yang, Zhiqi Huang, Zihao Huang, Zijia Zhao, and Ziwei Chen. Kimi-VL technical report, 2025b. URL https://arxiv.org/abs/2504.07491. Haozhe Wang, Chao Qu, Zuming Huang, Wei Chu, Fangzhen Lin, and Wenhu Chen. Vl-rethinker: Incentivizing self-reflection of vision-language models with reinforcement learning. arXiv preprint arXiv:2504.08837, 2025. Xinyu Wang, Yuliang Liu, Chunhua Shen, Chun Chet Ng, Canjie Luo, Lianwen Jin, Chee Seng Chan, Anton van den Hengel, and Liangwei Wang. On the general value of evidence, and bilingual scene-text visual question answering.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10126–10135, 2020. Lilian Weng. Reward hacking in reinforcement learning. lilianweng.github.io, Nov 2024. URL https://lilianweng.github.io/posts/2024-11-28-reward-hacking/. Renqiu Xia, Bo Zhang, Hancheng Ye, Xiangchao Yan, Qi Liu, Hongbin Zhou, Zijun Chen, Peng Ye, Min Dou, Botian Shi, et al. Chartx & chartvlm: A versatile benchmark and foundation model for complicated chart reasoning. arXiv preprint arXiv:2402.12185, 2024. Shijie Xia, Yiwei Qin, Xuefeng Li, Yan Ma, Run-Ze Fan, Steffi Chern, Haoyang Zou, Fan Zhou, Xiangkun Hu, Jiahe Jin, et al. Generative ai act ii: Test time scaling drives cognition engineering. arXiv preprint arXiv:2504.13828, 2025. Chi Xie, Zhao Zhang, Yixuan Wu, Feng Zhu, Rui Zhao, and Shuang Liang. Described object detection: Liberating object detection with flexible expressions. Advances in Neural Information Processing Systems, 36:79095–79107, 2023. Yi Yang, Xiaoxuan He, Hongkun Pan, Xiyan Jiang, Yan Deng, Xingtao Yang, Haoyu Lu, Dacheng Yin, Fengyun Rao, Minfeng Zhu, et al. R1-onevision: Advancing generalized multimodal reasoning through cross-modal formalization. arXiv preprint arXiv:2503.10615, 2025. Yiyang Yao, Peng Liu, Tiancheng Zhao, Qianqian Zhang, Jiajia Liao, Chunxin Fang, Kyusong Lee, and Qing Wang. How to evaluate the generalization of detection? A benchmark for comprehensive open-vocabulary detection. arXiv preprint arXiv:2308.13177, 2023. En Yu, Kangheng Lin, Liang Zhao, Jisheng Yin, Yana Wei, Yuang Peng, Haoran Wei, Jianjian Sun, Chunrui Han, Zheng Ge, et al. Perception-r1: Pioneering perception policy with reinforcement learning. arXiv preprint arXiv:2504.07954, 2025a. Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Tiantian Fan, Gaohong Liu, Lingjun Liu, Xin Liu, et al. Dapo: An open-source llm reinforcement learning system at scale. arXiv preprint arXiv:2503.14476, 2025b. Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, et al. Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9556–9567, 2024. Kaichen Zhang, Bo Li, Peiyuan Zhang, Fanyi Pu, Joshua Adrian Cahyono, Kairui Hu, Shuai Liu, Yuanhan Zhang, Jingkang Yang, Chunyuan Li, and Ziwei Liu. Lmms-eval: Reality check on the evaluation of large multimodal models, 2024. URL https://arxiv.org/abs/2407.12772. Hengguang Zhou, Xirui Li, Ruochen Wang, Minhao Cheng, Tianyi Zhou, and Cho-Jui Hsieh. R1-zero's "aha moment" in visual reasoning on a 2b non-sft model. arXiv preprint arXiv:2503.05132, 2025.
VideoGameBench: Can Vision-Language Models complete popular video games?

Alex L. Zhang, Thomas L. Griffiths, Karthik R. Narasimhan, Ofir Press
Princeton University

Abstract

Vision-language models (VLMs) have achieved strong results on coding and math benchmarks that are challenging for humans, yet their ability to perform tasks that come naturally to humans, such as perception, spatial navigation, and memory management, remains understudied. Real video games are crafted to be intuitive for humans to learn and master by leveraging innate inductive biases, making them an ideal testbed for evaluating such capabilities in VLMs. To this end, we introduce VideoGameBench, a benchmark consisting of 10 popular video games from the 1990s that VLMs directly interact with in real time. VideoGameBench challenges models to complete entire games with access to only raw visual inputs and a high-level description of objectives and controls, a significant departure from existing setups that rely on game-specific scaffolding and auxiliary information. We keep three of the games secret to encourage solutions that generalize to unseen environments. Our experiments show that frontier vision-language models struggle to progress beyond the beginning of each game. We find inference latency to be a major limitation of frontier models in the real-time setting; we therefore introduce VideoGameBench Lite, a setting where the game pauses while waiting for the LM's next action. The best-performing model, Gemini 2.5 Pro, completes only 0.48% of VideoGameBench and 1.6% of VideoGameBench Lite. We hope that the formalization of the human skills mentioned above into this benchmark motivates progress in these research directions.
1 Introduction

Language models (LMs) and vision-language models (VLMs) perform complex tasks remarkably well, even tasks that are challenging for humans, such as advanced mathematics (Azerbayev et al., 2024; Lin et al., 2025) and coding (Li et al., 2022; Luo et al., 2025; OpenAI et al., 2025). However, that does not necessarily mean that they demonstrate human-level performance on all tasks. Humans have perceptual, spatial, and memory-management abilities that provide strong inductive biases for learning new tasks (Lake et al., 2017; Dubey et al., 2018). To evaluate whether current AI systems are approaching those abilities, we propose a new challenge: completing video games from the 1990s (also known as the 32-bit era). We introduce VideoGameBench, a benchmark which challenges VLMs to complete, in real time, a suite of 10 different popular video games from both hand-held consoles (Game Boy and Game Boy Color) and PC (Microsoft DOS). Solving video games relies on fundamental multi-modal reasoning abilities (Shao et al., 2019), e.g. spatial awareness, memory retention, efficient exploration strategies, and real-time reaction to dynamic events. Video games are carefully crafted to be learnable and playable by humans, catering to human inductive biases (Allen et al., 2024). As a result, they provide an ideal setting for exploring how well agents reproduce those inductive biases (Dubey et al., 2018).

Correspondence to altzhang@mit.edu. Code and data at vgbench.com.

arXiv:2505.18134v1 [cs.AI] 23 May 2025

Figure 1: VideoGameBench provides an environment for vision-language models (VLMs) to interact with video game emulators – for example, the emulator above is playing The Legend of Zelda: Link's Awakening. Given information about
the game controls and emulator, and access to the game's raw frames, models provide actions in natural language.

VideoGameBench has three important novel features:

1. It challenges VLMs with significantly more complex and realistic environments than those found in earlier benchmarks, such as grid-world or text-only games (Paglieri et al., 2025; Nasir et al., 2024), and is one of the first benchmarks to use video games from the 1990s.
2. It evaluates how a single agent performs across different games, including three secret games specifically designed to test generalization to unseen or out-of-distribution scenarios. Unlike previous works (Mnih et al., 2013; Berner et al., 2019; Vinyals et al., 2019; Rubinstein et al., 2025), it challenges agents with environments that they may not have been trained on.
3. It only provides agents with raw game visual inputs, and does not allow game-specific hints, visual overlays, or tools (Kempka et al., 2016; Hershey, 2025).

Recently, Gemini Plays Pokemon (Joel Z, 2025) showed that a frontier VLM (Google DeepMind, 2025), with tailored tools for pathfinding, game-specific hints, and memory, could complete Pokemon Blue. Although VideoGameBench includes similar games, we focus on evaluating VLMs without human or tool-assisted intervention. We evaluate multiple frontier VLMs on VideoGameBench using our VG-Agent scaffolding and find that all models struggle to progress on any game: the best-performing model, Gemini 2.5 Pro (Google DeepMind, 2025), achieves a score of 0.48% on VideoGameBench, which represents the average percentage of each game that the agent completes. Our VG-Agent uses ReAct (Yao et al., 2023) with the ability to store information in context (Shinn et al., 2023), and has basic information on the controls and objectives for each game. We also introduce a set of simpler practice games to evaluate skills such as mouse movement and navigation, and find that most models perform poorly.
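The VG-Agent loop just sketched (ReAct-style reasoning plus a textual scratchpad carried across steps) can be illustrated in a few lines of Python. The class name, field names, and the reply format of the model call are illustrative assumptions, and the VLM is stubbed out; this is not the benchmark's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class VGAgent:
    """Minimal ReAct-style game agent sketch: observe a frame, query a
    model, act, and persist notes in a textual scratchpad (memory).
    All names here are illustrative, not VideoGameBench's real API."""
    system_prompt: str                 # game objectives and control instructions
    scratchpad: str = ""               # agent-maintained memory across steps
    history: list = field(default_factory=list)  # recent (frame, action) pairs

    def step(self, model, frame):
        # The prompt bundles instructions, memory, and recent context.
        prompt = (f"{self.system_prompt}\n"
                  f"Memory: {self.scratchpad}\n"
                  f"Recent actions: {[a for _, a in self.history[-3:]]}")
        reply = model(prompt, frame)   # assumed to return {"action": ..., "memory": ...}
        self.scratchpad = reply.get("memory", self.scratchpad)
        self.history.append((frame, reply["action"]))
        return reply["action"]

# A stub standing in for a real VLM API call.
def stub_model(prompt, frame):
    return {"action": "press_key A", "memory": "entered first room"}

agent = VGAgent(system_prompt="Reach the end of the level. A = jump.")
action = agent.step(stub_model, frame="<raw game screen>")
```

In the real agent the stubbed call would be a VLM request carrying the raw frame, and the model itself decides what to write into the scratchpad after every step.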
Finally, to enable more granular progress tracking, we release VideoGameBench Lite, a smaller benchmark where the emulator pauses during agent inference, eliminating latency issues from slow model responses. Developing scalable benchmarks based on video games requires new tools for measuring how well agents perform in games. As one of our contributions, we developed a novel method for detecting an agent's game progress: we scrape YouTube walkthroughs of the games in our benchmark, and use the timestamps in their descriptions to make a dataset of (image-frame, milestone) pairs. We then use perceptual hashing (Buchner, 2013; Marr and Hildreth, 1980) on the frames sent to the agent to detect how much of the game it completed. This approach significantly increases the ease of incorporating new tasks into benchmarks for VLMs, opening the door to creating other challenging benchmarks that assess how well agents align with human inductive biases.

2 VideoGameBench

VideoGameBench is a benchmark composed of a diverse suite of 23 curated video games split across a dev and a test set, with an environment to evaluate and communicate with VLM-based agents. The task is to solve the core objective of each game, e.g. defeating the final boss in Super Mario Land or completing the entire single-player campaign in Age of Empires.
2.1 Benchmark Construction

To enable our benchmark to run games from multiple different platforms in a modern computer environment, our VideoGameBench framework abstracts the underlying game emulator (currently supporting Game Boy via PyBoy (Mads Ynddal, 2025) and MS-DOS games via DOSBox (Veenstra et al., 2002), JS-DOS (Guryanov, 2025), and Playwright (Microsoft, 2020)) and provides a standardized interface for agents to communicate with. The focus on Game Boy, Game Boy Color, and classic MS-DOS video games was motivated by:

1. 2D/3D Environments: Compared to previous benchmarks (Paglieri et al., 2025; Nasir et al., 2024; Waytowich et al., 2024), the VideoGameBench games introduce more realistic and challenging visuals and also require significant planning (e.g. navigating the world in The Legend of Zelda: Link's Awakening), puzzle-solving (e.g. understanding physics in The Incredible Machine), and strategic thinking (e.g. dealing with multiple enemies shooting in Doom II).
2. Diverse Mechanics: Our game selection covers both controller-based mechanics (Game Boy) and mouse/keyboard interactions (MS-DOS), presenting different challenges for agent control. For example, mouse control requires precisely mapping points on a fixed screen, while controller D-pad (the flat, four-way directional button on a controller) movement requires understanding positions relative to the game character.

2.2 Task Formulation

In VideoGameBench, VLM agents are provided with instructions on emitting game actions (e.g. press "A" to jump) and the raw game screen, without extra information, as input, and provide an action as output at every step. We do not permit human-written hints beyond high-level game objectives and a description of game controls (see Appendix B).

Agent inputs. The primary observation provided to the agent at every step is the raw game screen as an image (or set of frames).
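To make the language-based side of this interface concrete, the sketch below parses action strings of the kind used in this section into structured commands. The exact grammar, the command names, and the 0.1-second default are our assumptions for illustration, not the benchmark's specification.

```python
def parse_action(text, default_duration=0.1):
    """Parse a language action like 'press_key space' or 'hold Up 2'
    into a (command, key, duration_seconds) tuple.
    The grammar here is an illustrative assumption."""
    parts = text.strip().split()
    if parts[0] == "press_key":
        # Brief press: key held for a short default duration.
        return ("press", parts[1], default_duration)
    if parts[0] == "hold":
        # Timed action: key held for an explicit number of seconds.
        return ("hold", parts[1], float(parts[2]))
    raise ValueError(f"unrecognised action: {text!r}")

def parse_sequence(texts):
    """Sequences of actions are executed in order."""
    return [parse_action(t) for t in texts]

print(parse_action("press_key space"))   # ('press', 'space', 0.1)
print(parse_sequence(["press_key A", "press_key A", "hold Up 2"]))
```

A real harness would then forward each tuple to the platform-specific emulator backend (keyboard/mouse events for MS-DOS, button presses for Game Boy).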
We deliberately avoid providing parsed text overlays or structured state information (as in previous works, including PySC2 (Vinyals et al., 2017) and VizDoom (Kempka et al., 2016)) to challenge the model's visual processing capabilities. In addition, we do not provide the agent with any intermediate rewards or signals throughout gameplay.

Agent actions. The agent interacts with the game controller through a language-based action interface. This supports single actions (e.g., press "Space"), sequences of actions (e.g., press "A", "A", "Start"), or sequences of timed actions (e.g., hold "Up" for 2 seconds). For example, the agent outputs "press_key space" to press the space bar for a default of 1/10 of a second. The specific available actions depend on the game platform (keyboard/mouse for MS-DOS, button presses for Game Boy).

Scoring. Each game is weighted equally (out of 1), and its score is divided uniformly by the number of predefined checkpoints in that game. Checkpoints were manually defined based on well-known level or stage divisions for these games (see Appendix C.4). Furthermore, we specifically allow agents to restart games after losing so that they can adjust their strategies. To prevent infinitely long trajectories, we provide a list of formal specifications to determine when a run ends by capping the overall play time per game (see Appendix A).

VideoGameBench Rules. There are many possible approaches to playing video games using a VLM agent. For example, one could allow engineering
visual overlays on top of the raw game screen to assist the agent. In addition, a benchmark designer could allow providing the agent with tools that access internal game-state information such as the game emulator's internal RAM. On Pokemon Blue, Gemini Plays Pokemon (Joel Z, 2025) both uses an overlay that marks the navigable tiles on-screen and stores a mini-map of previously explored areas that is extracted from the game's RAM. On VideoGameBench, we want to evaluate a VLM agent's ability to interpret raw visual inputs and directly interact with the game environments to achieve its goals. Therefore, we do not permit any visual overlays, and also do not permit providing any extra information to the agent sourced either from the internal game state or from the viewable screen. We only allow the raw game frames and a basic description of the game's objectives and controls as input to the agent.

2.3 The Games of VideoGameBench

VideoGameBench includes 10 games in the test set and 13 in the dev set, described in Table 1. Previous video game benchmarks for VLMs, such as Paglieri et al. (2025), Nasir et al. (2024), and Tsai et al. (2025), all include six or fewer games, so we limit our test set to 10 games due to the difficulty, length, and diversity of each game. Seven of these are public and three are secret (see §2.4).

Table 1: VideoGameBench games and their properties. Three of the games in our test set are kept secret. The right-most column indicates whether the video game is "real-time", i.e. requires real-time actions: (✓) means it requires real-time responses, (✗) means it does not. The Lite subset effectively removes this constraint. DX indicates the colored version of an originally Game Boy game.

Development Set
  Game                              Genre                    Dim.  Platform  Real-time?
  Quake                             FPS                      3D    MS-DOS    ✓
  Prince of Persia                  Platformer/Action        2D    MS-DOS    ✓
  Super Mario Land                  Platformer               2D    Game Boy  ✓
  Doom                              FPS                      3D    MS-DOS    ✓
  Warcraft II                       Strategy (Real-time)     2D    MS-DOS    ✗
  The Oregon Trail Deluxe           Simulation/Educational   2D    MS-DOS    ✓
  X-COM: UFO Defense                Strategy (TBS/Tactical)  2D    MS-DOS    ✓
  Scooby-Doo: Classic Creep Capers  Adventure                2D    GB Color  ✓
  Age of Empires                    Strategy (Real-time)     2D    MS-DOS    ✓
  Pokémon Red                       RPG                      2D    Game Boy  ✗
  Castlevania: The Adventure        Platformer/Action        2D    Game Boy  ✓
  Donkey Kong Land 2                Platformer               2D    Game Boy  ✓
  Mega Man: Dr. Wily's Revenge      Platformer/Action        2D    Game Boy  ✓

Test Set
  Doom II                           FPS                      3D    MS-DOS    ✓
  Kirby's Dream Land (DX)           Platformer               2D    Game Boy  ✓
  Zelda: Link's Awakening (DX)      Action-Adventure/RPG     2D    Game Boy  ✓
  Sid Meier's Civilization          Strategy (Turn-based)    2D    MS-DOS    ✗
  The Need for Speed                Racing                   3D    MS-DOS    ✓
  The Incredible Machine            Puzzle                   2D    MS-DOS    ✗
  Pokémon Crystal                   RPG                      2D    GB Color  ✗
  Secret Game #1                    ?                        ?     ?         ?
  Secret Game #2                    ?                        ?     ?         ?
  Secret Game #3                    ?                        ?     ?         ?

Game Genres. The VideoGameBench games were selected based on popular genres of video games, each offering unique challenges. We highlight challenges we observed for agents below:

• First-person shooter (FPS): Agents must traverse 3D
environments, aim at enemies, and dodge enemy attacks.
• Platformer: Agents must move a character under 2D physics mechanics to reach a goal, all while fighting and avoiding obstacles.
• Action-Adventure/RPG: Agents must reason over complex strategies to defeat or ally with enemies, and must keep track of long-horizon game-state information such as their own resources and the current objectives.
• Racing: Agents must continuously adjust and react in a 3D vehicle environment and race against other opponents.
• Puzzle: Agents must solve a series of puzzles based on in-game physics mechanics and tools.

2.4 Secret Games

Our objective is not to measure whether agents can play specific games, but whether they can adapt to, learn from, and progress in new environments. Therefore, we also test each agent on three secret games that we host on a private evaluation server. We determined that these games are of similar difficulty to the other games in VideoGameBench, each falling under one of the genres described in §2.3. We believe that including these games could incentivize agent researchers to focus on systems that can generalize to new environments, rather than on narrowly developed agents that can only play a limited, pre-determined set of games.

2.5 VideoGameBench Lite: Giving Agents Time to Think

A major bottleneck for current VLMs in real-time games is inference latency. We observe that VLMs take so long to respond that by the time they return an action to perform, the game state has already changed substantially, so the agent's chosen action is stale. When using an agent that processes multiple historical frames along with the current game image, as well as a history of previous actions and thoughts, processing times increase, further exacerbating this issue.
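The pause-during-inference idea behind the Lite setting can be sketched as a thin wrapper that only advances the emulator while an action is being applied, never while the agent is thinking. The `Emulator` interface below is a hypothetical stand-in, not the benchmark's real backend.

```python
class TurnBasedWrapper:
    """Sketch of a Lite-style setting: the emulator stays paused while
    the agent deliberates and advances only to apply each action.
    The emulator interface here is a hypothetical stand-in."""
    def __init__(self, emulator):
        self.emulator = emulator

    def play_step(self, agent):
        frame = self.emulator.get_frame()        # emulator is paused here
        action = agent(frame)                    # arbitrarily slow inference is fine
        self.emulator.step(action, seconds=0.1)  # resume only to apply the action
        return action

# Minimal fake emulator to show the control flow.
class FakeEmulator:
    def __init__(self):
        self.time_advanced = 0.0
    def get_frame(self):
        return "<frame>"
    def step(self, action, seconds):
        self.time_advanced += seconds

emu = FakeEmulator()
wrapper = TurnBasedWrapper(emu)
chosen = wrapper.play_step(lambda frame: "press_key A")
```

However long the agent takes inside `play_step`, game time only advances by the duration of the applied action, which is what turns a real-time game into a turn-based interaction.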
To disentangle reasoning ability from reaction-time constraints, we introduce VideoGameBench Lite, a variant of the full benchmark where the underlying emulator is paused whenever the agent is processing information and deciding on its next action; the game only resumes momentarily when an action is made. This change effectively turns the real-time game into a turn-based interaction, allowing evaluation of the agent's planning and decision-making capabilities irrespective of inference speed. The VideoGameBench Lite games are: Quake, Prince of Persia, and Super Mario Land from the dev set, and Doom II, Kirby's Dream Land, and Zelda: Link's Awakening from the test set.

2.6 Automated Progress Tracking

To make VideoGameBench finer-grained, rather than assigning a binary pass/fail score to each game, we introduce an automated progress-tracking mechanism to detect which levels or stages of the game the agent managed to complete in each run. To do this, we introduce a novel method for tracking progress: we first scrape game-screen checkpoint images (e.g. the end of a level) from YouTube video walkthroughs of each game, specifically walkthroughs that have timestamp pointers for each level in their descriptions (see Appendix C.3). We show the distribution of these checkpoints throughout each game in Appendix C.6.

Figure 2: To track progress on VideoGameBench, we scrape deterministic checkpoints from
online walkthroughs of video games and compute image hashes using (Buchner, 2013). These hashes are compared to the current game screen to determine whether an agent has reached a checkpoint, and the score is determined based on the relative timestamp of the checkpoint with respect to the walkthrough.

We automatically detect which checkpoint an agent has reached by applying perceptual image hashing (Marr and Hildreth, 1980; Buchner, 2013) to every frame output by the emulator during the agent's gameplay. We compute the Hamming distance (Hamming, 1950) between the current game frame and all of the perceptual hashes of the scraped checkpoint frames, and check whether any distance falls below a certain threshold (see Appendix C.5) to determine if the agent has reached a given checkpoint of the game (see Figure 2). To compute the agent's VideoGameBench score for a specific game, we track the furthest checkpoint reached by the agent, look up the timestamp in the associated walkthrough video, and compute the percentage of the game completed. See Appendix C.4 for a detailed list of these progress checkpoint images.

3 Experimental Setup

In this section, we describe the inputs provided to agents running on VideoGameBench. We also explain the agent baselines that we use to evaluate current performance on this benchmark.

VG-Agent. Because the games in VideoGameBench require context that is often not visible on-screen, the naïve approach of only providing a VLM with the current game screen would fail. Therefore, we implement a ReAct (Yao et al., 2023) agent to play video games as our baseline evaluation. This agent is given (1) an initial prompt with basic information about the game objectives; the agent receives a single frame half a second after taking an action, and each action lasts for a fraction of a second (e.g.
“press_key A” presses the “A” key for 0.1 seconds), (2) an initial prompt with explicit game-control instructions, such as what the “A” button does, and (3) a prompt that asks the model to output which information it wants to store in a textual scratchpad (memory) after every step. We provide the last few frames and agent observations in context.

Models. VideoGameBench requires interpreting raw game screenshots and producing structured outputs in language while also understanding the game. We therefore evaluate exclusively on VLMs, namely GPT-4o (gpt-4o-2024-08-06) (OpenAI et al., 2024), Claude Sonnet 3.7 (claude-3-7-sonnet-20250219) (Anthropic, 2025), and Gemini 2.5 Pro (gemini-2.5-pro-preview-03-25) (Google DeepMind, 2025), three of the leading closed-source VLMs. We also evaluate Gemini 2.0 Flash (gemini-2.0-flash) (Google, 2025) for its fast inference and Llama 4 Maverick (Llama-4-Maverick-17B-128E-Instruct-FP8) (Meta AI, 2025; Together AI, 2025) to include an open-source model. Following Yao et al. (2023), all models are run with temperature 0.7 and a maximum output length of 1024 tokens, except for the Gemini models, which we set to 2048 tokens because they frequently output long reflections.

Constraints. In many games a model could run indefinitely because it gets stuck without ending the game. Given a limited
budget, we preemptively end runs based on rules indicating insufficient progress toward the next checkpoint (see Appendix D.1).

Ensuring the validity of our interface. To validate the completeness of our interface, we had a human player complete the first levels of a few games using only the same information available to the language model. For details on the human interface, see Appendix C.2.

VG-Agent on VideoGameBench Lite. The VideoGameBench Lite setup modifies the emulator to pause while the agent processes the input and generates an action, similar to an OpenAI Gym environment (Brockman et al., 2016). In our experiments, at every step on the Game Boy emulator we provide the agent with the most recent game frame, while on the DOS emulator we provide it with five past frames spaced 0.1 seconds apart. Otherwise, we evaluate all models with the same settings as VideoGameBench, including the same game-specific system prompts.

4 Results

We evaluate VG-Agent on the test splits of VideoGameBench and VideoGameBench Lite across different frontier VLMs in Table 2, then analyze the models' performance and their failure modes. The best-performing model, Gemini 2.5 Pro, is only able to reach the first checkpoint in a single game. Due to budget constraints, we limit each model to one run per game. Our study of run-to-run variance (on three games using Gemini 2.5 Pro) shows little variance in these settings; we therefore believe that repeating these experiments would not have led to substantially different results (see Appendix D.3).

4.1 Benchmark Performance

In Table 2, we show similar trends among all models, which struggle to make progress in any game; Gemini 2.5 Pro achieves the highest score of 0.48% on VideoGameBench.
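The progress-tracking mechanism of Section 2.6 reduces to hashing each emulator frame, comparing against precomputed checkpoint hashes under a Hamming-distance threshold, and converting the furthest matched checkpoint into a completion percentage via its walkthrough timestamp. A minimal sketch of this idea, which substitutes a toy average hash over small grayscale pixel grids for the imagehash library's perceptual hashes; all function names, the threshold value, and the example numbers are illustrative, not the benchmark's actual implementation:

```python
# Toy sketch of checkpoint detection and scoring (Section 2.6).
# A "frame" here is a small grayscale grid; the real pipeline hashes
# emulator screenshots with the imagehash library (Buchner, 2013).

def average_hash(frame):
    """One bit per pixel: 1 if the pixel is at or above the frame's mean brightness."""
    flat = [p for row in frame for p in row]
    mean = sum(flat) / len(flat)
    return tuple(int(p >= mean) for p in flat)

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes (Hamming, 1950)."""
    return sum(a != b for a, b in zip(h1, h2))

def furthest_checkpoint(frames, checkpoint_hashes, threshold=1):
    """Index of the last checkpoint any frame matches within the threshold, or -1."""
    best = -1
    for frame in frames:
        fh = average_hash(frame)
        for i, ch in enumerate(checkpoint_hashes):
            if hamming_distance(fh, ch) <= threshold:
                best = max(best, i)
    return best

def game_score(frames, checkpoints, timestamps, total_duration, threshold=1):
    """Completion percentage: walkthrough timestamp of the furthest matched
    checkpoint, relative to the walkthrough's total duration."""
    hashes = [average_hash(c) for c in checkpoints]
    i = furthest_checkpoint(frames, hashes, threshold)
    return 0.0 if i < 0 else 100.0 * timestamps[i] / total_duration

# Toy run: one checkpoint image; the second frame of the run matches it,
# and that checkpoint sits 60s into a 1200s walkthrough -> 5% completed.
checkpoint = [[10, 200], [30, 250]]
run = [[[0, 0], [0, 0]], [[12, 198], [28, 251]]]
print(game_score(run, [checkpoint], timestamps=[60], total_duration=1200))  # 5.0
```

With real screenshots, the hash would be computed over a downscaled grayscale image and the threshold tuned per game, as the paper does in Appendix C.5.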
Readers might wonder why Gemini 2.5 Pro struggles to complete even the first checkpoint in Pokemon Crystal, when it was able to complete a very similar game, Pokemon Blue, in Gemini Plays Pokemon (Joel Z, 2025). This can be attributed to the stricter ruleset imposed in the VideoGameBench setting. In Joel Z (2025), the agent scaffolding uses several purpose-built tools and overlays designed to assist the agent with 2D navigation and memory management. Furthermore, the scaffolding and prompting were updated multiple times throughout the run, and the agent was sometimes even given direct instructions (e.g. “stay on floor B3F”). On VideoGameBench and VideoGameBench Lite, by contrast, we explicitly disallow access to any information other than the raw visual frames, including data sourced from the game state. We disallow visual overlays on the screen, such as maps of previously visited locations, and prohibit any form of human intervention while the agent plays the game.

Table 2: Performance on the VideoGameBench test split, a benchmark consisting of 7 games and 3 secret games, which we keep private. Each score is displayed as a percentage of the game completed based on completed checkpoints, i.e. 0% means the agent did not reach the first checkpoint. The overall score is the average of scores across all games. The cost of each run is reported in Appendix
D.2.

VideoGameBench             GPT-4o   Sonnet 3.7   Gemini 2.5 Pro   LLaMA 4   Gemini 2.0 Flash
Civilization I             0%       0%           0%               0%        0%
The Need for Speed         0%       0%           0%               0%        0%
The Incredible Machine     0%       0%           0%               0%        0%
Pokemon Crystal            0.9%     0%           0%               0%        0%
Doom II                    0%       0%           0%               0%        0%
Kirby’s Dream Land (DX)    0%       0%           4.8%             0%        0%
Link’s Awakening (DX)      0%       0%           0%               0%        0%
Secret Game #1             0%       0%           0%               0%        0%
Secret Game #2             0%       0%           0%               0%        0%
Secret Game #3             0%       0%           0%               0%        0%
Overall Score              0.09%    0%           0.48%            0%        0%

Table 3: Scores on the VideoGameBench Lite test split, a benchmark consisting of three games where the environment pauses while the agent is thinking. Each score is displayed as a percentage; 100% indicates a completed game.

VideoGameBench Lite        GPT-4o   Sonnet 3.7   Gemini 2.5 Pro   LLaMA 4   Gemini 2.0 Flash
Doom II                    0%       0%           0%               0%        0%
Kirby’s Dream Land         4.8%     4.8%         4.8%             0%        0%
Link’s Awakening (DX)      0%       0%           0%               0%        0%
Overall Score              1.6%     1.6%         1.6%             0%        0%

Ablating latency through VideoGameBench Lite. In Table 3, we evaluate VG-Agent on the VideoGameBench Lite subset of games. Noticeably, every model performs considerably better, both quantitatively and qualitatively, on VideoGameBench Lite, even on games like Doom II where the models do not reach the first checkpoint. Nevertheless, the low overall performance suggests that, even ignoring inference latency, VLM agents struggle to reason effectively over video game environments.

Ablating game complexity with our practice game suite. To investigate the large performance gap on VideoGameBench, we devised three simple PC games. All games in VideoGameBench require visually identifying objects and their locations on screen, and many require clicking and dragging to perform tasks. Our Location Clicking Game requires players to click on 10 different markers that appear one at a time at different locations on the screen. The Mouse Dragging Game consists of 10 different levels where the player must drag a marker along a path shown on-screen.
The 2D Navigation Game is a grid-world setting where players use the arrow keys to move a marker through 10 different mazes. See Appendix F for screenshots of these games.

Table 4: Performance of frontier VLMs using the VG-Agent scaffold on three practice games we constructed, scored by how many targets (out of 10) each model could interact with in 250 actions. In the Location Clicking Game, the model must click 10 circles. In the Mouse Dragging Game, the model must drag 10 circles along a desired path. In the 2D Navigation Game, the model must move a tile through 10 different grid mazes.

Game                 GPT-4o   Sonnet 3.7   Gemini 2.5 Pro   LLaMA 4   Gemini 2.0 Flash
Location Clicking    30%      100%         100%             10%       0%
Mouse Dragging       0%       10%          0%               0%        0%
2D Navigation        30%      80%          50%              20%       60%

In Table 4, we find that Claude Sonnet 3.7 and Gemini 2.5 Pro are able to complete the Location Clicking Game but struggle with the other games, while the remaining models struggle to complete all
of the practice games. We tested our agent interface by having a co-author play the practice games through it: they received the same frames an agent is provided at every step and had to issue commands in the same format as an agent. We were able to complete all of our practice games through this interface, suggesting that the VLMs failed because of weaknesses in the models, not because of weaknesses in our interface.

4.2 Qualitative Performance Analysis

In this section, we examine the low scores of frontier VLMs on VideoGameBench and VideoGameBench Lite by qualitatively analyzing game trajectories.

Solving VideoGameBench games requires narrowing the Knowing-doing Gap. In The Legend of Zelda: Link's Awakening, all models observe that the door to leave the starter room is at the bottom of the screen, but repeatedly press “down” irrespective of where the controlled character is in the room. We observe this type of behavior in several other games, which we highlight in Appendix E.1. Notably, Paglieri et al. (2025) observe a similar failure mode, which they call the “knowing-doing gap”: a model knows what needs to be done at a given point but carries out incorrect actions, underscoring a disconnect between knowledge and execution.

Frontier models struggle to correctly process visual inputs. In VideoGameBench, agents are only given access to the raw visual frames provided by the game emulator.
We find that incorrectly processing the given input frames frequently leads to illogical behavior that can fatally compromise a run: for example, the Gemini 2.0 Flash agent in Doom II wastes all its ammo on a dead enemy it misperceives as alive, and the GPT-4o agent in The Legend of Zelda: Link's Awakening mistakenly believes it has already spoken to an NPC because it saw itself standing next to the NPC in a previous frame, despite no interaction (see Appendix E.2 for more details).

VG-Agent lacks planning and memory-management abilities. We frequently observe frontier models struggling to track game-state information and objectives, similar to findings in other visual game benchmarks (Paglieri et al., 2025; Hu et al., 2025). These limitations lead to incorrect decision-making or repeated action loops, so that agents cannot progress further in the game. For example, in Doom II, VG-Agent with GPT-4o overwrites its textual scratchpad memory after reaching a new zone, deleting information about where it has been and thus repeatedly traveling back and forth between the same zones (see Appendix E.3).

The effect of agent scaffolds on video game performance. We find that the performance of all frontier models using VG-Agent on Pokemon Crystal is much worse than that of the Claude Plays Pokemon (Hershey, 2025) and Gemini Plays Pokemon (Joel Z, 2025) agents on Pokemon Red and Pokemon Blue respectively, both very similar games to Pokemon Crystal. We believe this performance gap can mainly be attributed to the additional information, tools, and scaffolding provided to these agents, which
may negate many of the limitations we observed. For VideoGameBench, we enforce strict rules regarding external information, tool use, and agent scaffolds to better focus on evaluating the capabilities of unassisted VLMs.

5 Related Works

Game environments have long served as a testbed for deep learning agents (Mnih et al., 2013), beginning with reinforcement learning approaches (Shao et al., 2019) and more recently extending to vision-language models that leverage multimodal inputs to interact with and reason about gameplay (Jin et al., 2024; Paglieri et al., 2025).

5.1 Reinforcement Learning to Play Games

Reinforcement learning (RL) has achieved significant success in game environments. Early work demonstrated RL agents capable of mastering Atari games from pixel inputs (Mnih et al., 2015). More advanced RL systems have achieved superhuman performance in complex board games like Go (Silver et al., 2016a) and real-time strategy games like StarCraft II (Vinyals et al., 2019). Even in complex team-based games like Dota 2, RL agents (Berner et al., 2019) have beaten professional players. These successes often rely on custom game APIs (Kempka et al., 2016; Vinyals et al., 2017), sophisticated reward shaping (Ma et al., 2024), simulated game trajectories (Ha and Schmidhuber, 2018; Hafner et al., 2023; Valevski et al., 2024), self-play (Silver et al., 2017), or learning from human data (Silver et al., 2016b; Berner et al., 2019; Vinyals et al., 2019). In 3D environments like Minecraft (Fan et al., 2022), model-based RL approaches such as Dreamer have shown promise in learning complex behaviors from visual inputs (Hafner et al., 2023). Recently, hybrid approaches combining LMs with RL techniques have emerged (Yao et al., 2020), particularly for games with significant linguistic components.
For instance, agents combining language models with RL policies have achieved human-competitive performance in text-based battle simulators like Pokemon Showdown (Jin et al., 2024). However, traditional RL often struggles with extremely sparse rewards (Mnih et al., 2015), long-horizon tasks requiring complex reasoning or common sense (Küttler et al., 2020; Yao et al., 2020), and efficiently utilizing prior knowledge (Zhang et al., 2024), areas where foundation models might offer advantages.

5.2 VLMs and LMs as Game Playing Agents

VLMs and LMs offer a different paradigm for game playing, leveraging their pre-trained knowledge and reasoning capabilities to interpret visual scenes and text, formulate plans, and generate actions, often with minimal or no game-specific training (Tan et al., 2024; Ruoss et al., 2025). This reasoning paradigm is more akin to that of humans, who have been shown to rely heavily on priors when solving games (Dubey et al., 2018). Early work explored LMs in text-based adventure games, demonstrating their potential for planning and interaction based on textual descriptions (Tuyls et al., 2022; Yao et al., 2022, 2023). Many benchmarks have also been developed to evaluate VLM agent capabilities such as visual-language understanding (Guan et al., 2024) and 2D/3D motion planning (Gan et al., 2021; Nasir et al., 2024). More recently, efforts have focused on applying VLMs to visually rich games. Some approaches simplify the environment, using object detectors or textual summaries (Yuan et al., 2023). Others attempt direct interaction
with game interfaces, such as projects demonstrating models playing Pokemon based on screen captures (Rubinstein et al., 2025; Hershey, 2025). Agents have also been developed for games like Mario and Sokoban using VLM-based reasoning (Wu et al., 2023; Tan et al., 2024). Another line of work includes LMs playing games that are traditionally hard for RL due to complexity or language elements (Paglieri et al., 2025). Despite this progress, evaluating VLMs on complete, complex, real-time video games remains an open challenge. Existing benchmarks often focus on simplified environments (Gan et al., 2021; Nasir et al., 2024), short tasks (Shridhar et al., 2021; Hu et al., 2025), or lack the real-time interaction constraints of many popular games (Ruoss et al., 2025; Paglieri et al., 2025).

6 Discussion

Limitations. VideoGameBench primarily covers games and emulators from the 1990s; we hope to expand it to more games and platforms. The automatic checkpoint detection we introduced works for measuring game progress, but cannot provide extremely fine-grained or continuous markers. Lastly, we recognize that our wide range of mechanisms for preventing data leakage (e.g. a dev and test split of games, secret games, and strict scaffolding rules) could still be bypassed, intentionally or unintentionally, since information, guides, and video walkthroughs about virtually all video games are widely available online and might make their way into pre-training or finetuning corpora.

Broader Impact. VideoGameBench evaluates whether AI systems can operate in new environments. Although we focus on the virtual game setting, improvements on VideoGameBench may translate into progress in real-world applications, including controlling autonomous robots and drones. We believe that evaluating and understanding agent capabilities in this virtual setting will be important for monitoring the potential for harmful real-world behavior in future systems.

Conclusion.
Video games offer a uniquely rich and underexplored environment for evaluating model abilities: they require spatial reasoning, long-term memory, fast perception-action loops, and the ability to generalize across diverse scenarios. VideoGameBench leverages this setting to test whether vision-language models can exhibit the kinds of inductive biases humans use to complete novel tasks. We hope that this benchmark and future contributions will serve to build autonomous agents that are trustworthy, robust, and able to generalize to new tasks.

References

Kelsey Allen, Franziska Brändle, Matthew Botvinick, Judith E Fan, Samuel J Gershman, Alison Gopnik, Thomas L Griffiths, Joshua K Hartshorne, Tobias U Hauser, Mark K Ho, et al. Using games to understand the mind. Nature Human Behaviour, 8(6):1035–1043, 2024.

Anthropic. Claude 3.7 sonnet system card, February 2025. URL https://assets.anthropic.com/m/785e231869ea8b3b/original/claude-3-7-sonnet-system-card.pdf.

Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q. Jiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An open language model for mathematics, 2024. URL https://arxiv.org/abs/2310.10631.

Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Piotr Dębiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, et al. Dota 2 with large scale deep reinforcement learning. arXiv preprint arXiv:1912.06680, 2019.

Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym, 2016. URL https://arxiv.org/abs/1606.01540.

Johannes Buchner. imagehash: A
Python Perceptual Image Hashing Module, 2013. URL https://github.com/JohannesBuchner/imagehash.

Rachit Dubey, Pulkit Agrawal, Deepak Pathak, Thomas L. Griffiths, and Alexei A. Efros. Investigating human priors for playing video games, 2018. URL https://arxiv.org/abs/1802.10217.

Linxi Fan, Guanzhi Wang, Yunfan Jiang, Ajay Mandlekar, Yuncong Yang, Haoyi Zhu, Andrew Tang, De-An Huang, Yuke Zhu, and Anima Anandkumar. Minedojo: Building open-ended embodied agents with internet-scale knowledge, 2022. URL https://arxiv.org/abs/2206.08853.

Chuang Gan, Siyuan Zhou, Jeremy Schwartz, Seth Alter, Abhishek Bhandwaldar, Dan Gutfreund, Daniel L. K. Yamins, James J DiCarlo, Josh McDermott, Antonio Torralba, and Joshua B. Tenenbaum. The threedworld transport challenge: A visually guided task-and-motion planning benchmark for physically realistic embodied ai, 2021. URL https://arxiv.org/abs/2103.14025.

Google. Gemini 2.0: Flash, flash-lite and pro. https://developers.googleblog.com/en/gemini-2-family-expands/, February 2025.

Google DeepMind. Gemini 2.5: Our newest gemini model with thinking, March 2025. URL https://blog.google/technology/google-deepmind/gemini-model-thinking-updates-march-2025/.

Tianrui Guan, Fuxiao Liu, Xiyang Wu, Ruiqi Xian, Zongxia Li, Xiaoyu Liu, Xijun Wang, Lichang Chen, Furong Huang, Yaser Yacoob, Dinesh Manocha, and Tianyi Zhou. Hallusionbench: An advanced diagnostic suite for entangled language hallucination and visual illusion in large vision-language models, 2024. URL https://arxiv.org/abs/2310.14566.

Alexander Guryanov. JS-DOS. https://js-dos.com/, 2025. Version: js-dos-v8.

David Ha and Jürgen Schmidhuber. Recurrent world models facilitate policy evolution. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.
URL https://proceedings.neurips.cc/paper_files/paper/2018/file/2de5d16682c3c35007e4e92982f1a2ba-Paper.pdf.

Danijar Hafner, Jurgis Pasukonis, Jimmy Ba, and Timothy Lillicrap. Mastering diverse domains through world models. In Advances in Neural Information Processing Systems, 2023.

R. W. Hamming. Error detecting and error correcting codes. Bell System Technical Journal, 29(2):147–160, 1950. doi: https://doi.org/10.1002/j.1538-7305.1950.tb00463.x. URL https://onlinelibrary.wiley.com/doi/abs/10.1002/j.1538-7305.1950.tb00463.x.

David Hershey. Claude Plays Pokemon Twitch Stream. https://www.twitch.tv/claudeplayspokemon, 2025.

Lanxiang Hu, Qiyu Li, Anze Xie, Nan Jiang, Ion Stoica, Haojian Jin, and Hao Zhang. Gamearena: Evaluating llm reasoning through live computer games, 2025. URL https://arxiv.org/abs/2412.06394.

Zhiyuan Jin, Tianyu Cui, Hao Zhang, Zihan Chen, Yang Liu, Hao Wang, Yuandong Cheng, and Chi Jin. PokeChamp: Simulation-based reinforcement learning for pokemon battles. arXiv preprint arXiv:2503.04094, 2024.

Joel Z. Gemini Plays Pokemon Twitch Stream. https://www.twitch.tv/gemini_plays_pokemon, 2025.

Michał Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Jaśkowski. Vizdoom: A doom-based ai research platform for visual reinforcement learning, 2016. URL https://arxiv.org/abs/1605.02097.

Heinrich Küttler, Nantas Nardelli, Alexander H. Miller, Roberta Raileanu, Marco Selvatici, Edward Grefenstette, and Tim Rocktäschel. The nethack learning environment, 2020. URL https://arxiv.org/abs/2006.13760.

Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building machines that learn and think like people. Behavioral and Brain Sciences, 40:e253, 2017.
Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d’Autume, Igor Babuschkin, Xinyun Chen, Po-Sen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel J. Mankowitz, Esme Sutherland Robson, Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu, and Oriol Vinyals. Competition-level code generation with alphacode.
Science, 378(6624):1092–1097, December 2022. ISSN 1095-9203. doi: 10.1126/science.abq1158. URL http://dx.doi.org/10.1126/science.abq1158.

Yong Lin, Shange Tang, Bohan Lyu, Jiayun Wu, Hongzhou Lin, Kaiyu Yang, Jia Li, Mengzhou Xia, Danqi Chen, Sanjeev Arora, and Chi Jin. Goedel-prover: A frontier model for open-source automated theorem proving, 2025. URL https://arxiv.org/abs/2502.07640.

Michael Luo, Sijun Tan, Roy Huang, Ameen Patel, Alpay Ariyak, Qingyang Wu, Xiaoxiang Shi, Rachel Xin, Maurice Weber, Colin Cai, Ce Zhang, Li Erran Li, Raluca Ada Popa, and Ion Stoica. Deepcoder: A fully open-source 14b coder at o3-mini level. https://www.together.ai/blog/deepcoder, 2025. TogetherAI Blog.

Yecheng Jason Ma, William Liang, Guanzhi Wang, De-An Huang, Osbert Bastani, Dinesh Jayaraman, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Eureka: Human-level reward design via coding large language models, 2024. URL https://arxiv.org/abs/2310.12931.

Mads Ynddal. PyBoy. https://github.com/Baekalfen/PyBoy, 2025.

David Marr and Ellen Hildreth. Theory of edge detection. Proceedings of the Royal Society of London. Series B. Biological Sciences, 207(1167):187–217, 1980. doi: 10.1098/rspb.1980.0020. URL https://royalsocietypublishing.org/doi/10.1098/rspb.1980.0020.

Meta AI. The llama 4 herd: The beginning of a new era of natively multimodal ai innovation. https://ai.meta.com/blog/llama-4-multimodal-intelligence/, April 2025.

Microsoft. Playwright. https://playwright.dev/, 2020.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning, 2013. URL https://arxiv.org/abs/1312.5602.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K.
Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning, 2015.

Muhammad Umair Nasir, Steven James, and Julian Togelius. Gametraversalbenchmark: Evaluating planning abilities of large language models through traversing 2d game maps, 2024. URL https://arxiv.org/abs/2410.07765.

OpenAI, Aaron Hurst, Adam Lerer, et al. Gpt-4o system card, 2024. URL https://arxiv.org/abs/2410.21276.

OpenAI, Ahmed El-Kishky, Alexander Wei, Andre Saraiva, Borys Minaiev, Daniel Selsam, David Dohan, Francis Song, Hunter Lightman, Ignasi Clavera, Jakub Pachocki, Jerry Tworek, Lorenz Kuhn, Lukasz Kaiser, Mark Chen, Max Schwarzer, Mostafa Rohaninejad, Nat McAleese, o3 contributors, Oleg Mürk, Rhythm Garg, Rui Shu, Szymon Sidor, Vineet Kosaraju, and Wenda Zhou. Competitive programming with large reasoning models, 2025. URL https://arxiv.org/abs/2502.06807.

Davide Paglieri, Bartłomiej Cupiał, Samuel Coward, Ulyana Piterbarg, Maciej Wolczyk, Akbir Khan, Eduardo Pignatelli, Łukasz Kuciński, Lerrel Pinto, Rob Fergus, Jakob Nicolaus Foerster, Jack Parker-Holder, and Tim Rocktäschel. Balrog: Benchmarking agentic llm and vlm reasoning on games, 2025. URL https://arxiv.org/abs/2411.13543.

David Rubinstein, Keelan Donovan, Daniel Addis, Kyoung Whan Choe, Joseph Suarez, and Peter Whidden. Pokerl. https://drubinstein.github.io/pokerl/, 2025.

Anian Ruoss, Fabio Pardo, Harris Chan, Bonnie Li, Volodymyr Mnih, and Tim Genewein.
Lmact: A benchmark for in-context imitation learning with long multimodal demonstrations, 2025. URL https://arxiv.org/abs/2412.01441.

Kun Shao, Zhentao Tang, Yuanheng Zhu, Nannan Li, and Dongbin Zhao. A survey of deep reinforcement learning in video games, 2019. URL https://arxiv.org/abs/1912.10944.

Noah Shinn, Federico Cassano, Edward Berman, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning, 2023. URL https://arxiv.org/abs/2303.11366.

Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Côté, Yonatan Bisk, Adam Trischler, and Matthew Hausknecht. Alfworld: Aligning text and embodied environments for interactive learning, 2021. URL https://arxiv.org/abs/2010.03768.

David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016a.

David Silver, Aja Huang, Christopher J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of go with deep neural networks and tree search. Nature, 529:484–503, 2016b. URL http://www.nature.com/nature/journal/v529/n7587/full/nature16961.html.

David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis. Mastering chess and shogi by self-play with a general reinforcement learning algorithm, 2017. URL https://arxiv.org/abs/1712.01815.

Weihao Tan, Ziluo Ding, Wentao Zhang, Boyu
https://arxiv.org/abs/2505.18134v1
Li, Bohan Zhou, Junpeng Yue, Haochong Xia, Jiechuan Jiang, Longtao Zheng, Xinrun Xu, Yifei Bi, Pengjie Gu, Xinrun Wang, Börje F. Karlsson, Bo An, and Zongqing Lu. Towards general computer control: A multimodal agent for Red Dead Redemption II as a case study. In ICLR 2024 Workshop on Large Language Model (LLM) Agents, 2024. URL https://openreview.net/forum?id=pmcFzuUxsP.

Together AI. Together AI partners with Meta to offer Llama 4: SOTA multimodal MoE models. https://www.together.ai/blog/llama-4, April 2025.

Chen Feng Tsai, Xiaochen Zhou, Sierra S. Liu, Jing Li, Mo Yu, and Hongyuan Mei. Can large language models play text games well? Current state-of-the-art and open questions, 2025. URL https://arxiv.org/abs/2304.02868.

Jens Tuyls, Shunyu Yao, Sham Kakade, and Karthik Narasimhan. Multi-stage episodic control for strategic exploration in text games, 2022. URL https://arxiv.org/abs/2201.01251.

Dani Valevski, Yaniv Leviathan, Moab Arar, and Shlomi Fruchter. Diffusion models are real-time game engines, 2024. URL https://arxiv.org/abs/2408.14837.

Peter Veenstra, Sjoerd van der Berg, and The DOSBox Team. DOSBox. https://www.dosbox.com/, 2002.

Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander S Vezhnevets, Malcolm Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Demis Horgan, Igor Babuschkin, Tom Schaul, Hexiang Wang, Timothy Lillicrap, and Demis Hassabis. PySC2: StarCraft II learning environment, 2017.

Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature, 575(7782):350–354, 2019.

Nicholas R. Waytowich, Devin White, MD Sunbeam, and Vinicius G. Goecks. Atari-GPT: Benchmarking multimodal large language models as low-level policies in Atari games, 2024.
URL https://arxiv.org/abs/2408.15950.

Hao Wu, Xueyang Chen, Runlong Wang, Zicheng Yuan, Licheng Wang, Zhenggang Zhang, Jie Zhou, and Joseph J Lim. Visual programming for compositional reasoning and action planning in autonomous agents. arXiv preprint arXiv:2310.00425, 2023.

Shunyu Yao, Rohan Rao, Matthew Hausknecht, and Karthik Narasimhan. Keep CALM and explore: Language models for action generation in text-based games, 2020. URL https://arxiv.org/abs/2010.02903.

Shunyu Yao, Howard Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. WebShop: Towards scalable real-world web interaction with grounded language agents. In Advances in Neural Information Processing Systems, 2022.

Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models. ICLR, 2023.

Zeyi Yuan, Runsheng Jiang, Jinpeng Chen, Ziyu Wang, Chen Chen, Ziquan Wu, Zhongyue Wu, Chengqi Li, Xinrun Yang, Fangkai Zhao, et al. Ghost in the Minecraft: Generally capable agents for open-world environments via large language models with text-based knowledge and memory. arXiv preprint arXiv:2305.17144, 2023.

Alex Zhang, Khanh Nguyen, Jens Tuyls, Albert Lin, and Karthik Narasimhan. Language-guided world models: A model-based approach to AI control, 2024. URL https://arxiv.org/abs/2402.01695.

A VideoGameBench Rules

In this section, we provide more details on the rules used for the experiments in Table 2 and Table 3. Firstly, we allow each agent to be prompted with basic information about the game setting and the game controls.
Secondly, the game environment provides only the raw game frames as input. Finally, we define a set of rules for determining when a run is over on VideoGameBench. We specifically allow runs to continue even after an agent has failed or hit a "game over" screen multiple times, and also note that some games do not have an end state other than completing the game (e.g. The Incredible Machine). We currently specify a simple rule of 20× the time of the particular walkthrough for each game on VideoGameBench. For VideoGameBench Lite, we impose a similar restriction but assume each step is one second of playtime.

VideoGameBench                   Max Runtime
Zelda: Link's Awakening (DX)     34:50:40
Doom II                          81:44:00
Kirby's Dream Land DX            6:21:00
Need for Speed                   30:32:20
The Incredible Machine           43:03:00
Pokemon Crystal                  226:30:40
Civilization                     160:15:20

VideoGameBench Lite              Max Steps
Zelda: Link's Awakening (DX)     101,440
Doom II                          297,840
Kirby's Dream Land (DX)          21,603

Figure 3: To determine when a run ends in VideoGameBench, we provide a bound of 20× the length of the scraped walkthroughs in Table 5. For VideoGameBench Lite, we use a similar metric, but determine time by translating one step to one second. Time is in the format (hours:minutes:seconds).

For information on the stricter set of rules we applied for our experiments due to budget constraints, see Appendix D.1.

B Prompts for VideoGameAgent Experiments

In this section, we provide all prompts used for our experiments. These prompts also serve as a starting point for future agents on VideoGameBench, and can be modified as long as they fall within the rules.

B.1 Emulator-specific Prompts

We provide prompts that are specific to a particular emulator. In the MS-DOS prompt for our experiments, we incorrectly included a paragraph about an overlay, but note that this overlay was not used in any experiments in § 4.
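The run-termination rule from Appendix A can be sketched in a few lines. This is a minimal illustration in Python; the helper names (`parse_hms`, `max_runtime_seconds`, `format_hms`) and the `factor` parameter are our own naming, not part of the benchmark code.

```python
# Sketch of the run-termination rule (helper names are illustrative, not
# from the benchmark code): a run ends after 20x the length of the game's
# scraped walkthrough; in VideoGameBench Lite, one agent step is treated
# as one second of playtime.

def parse_hms(t: str) -> int:
    """Convert an 'H:MM:SS' timestamp into total seconds."""
    h, m, s = (int(x) for x in t.split(":"))
    return h * 3600 + m * 60 + s

def max_runtime_seconds(walkthrough_hms: str, factor: int = 20) -> int:
    """Upper bound on run length: factor x the walkthrough length."""
    return factor * parse_hms(walkthrough_hms)

def format_hms(seconds: int) -> str:
    """Render seconds back as 'H:MM:SS' for comparison with the table."""
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{h}:{m:02d}:{s:02d}"

# A 1:44:32 walkthrough yields format_hms(max_runtime_seconds("1:44:32"))
# == "34:50:40", the 20x bound listed for Zelda: Link's Awakening (DX).
```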
Instructions for MS-DOS Games

You are a computer agent that uses the ReACT (Reasoning, Action, Observation) framework with memory to play a video game. For each step, you should:

1. Think: Analyze the current state and decide what to do next
2. Action: Choose one of the following actions:
- click [options as action_input]: Click the mouse at the current mouse position. Options include:
  * right: Right click instead of left click (default is left click)
  * shift: Hold shift while clicking
  * ctrl: Hold ctrl while clicking
  * alt: Hold alt while clicking
  Multiple modifiers can be combined with +, e.g. "shift+ctrl"
- move x,y: Move the mouse to (x, y), where x is 0 on the left and 640 on the right, and y is 0 on the top and 400 on the bottom.
- drag x,y: Drag (move while left button is down) to (x, y) from the current mouse position, where x is 0 on the left and 640 on the right, and y is 0 on the top and 400 on the bottom.
- scroll_up amount: Scroll up by the specified amount
- scroll_down amount: Scroll down by the specified amount
- write text: Type the specified text
- press_key key: Press a
specific key or key combination
- hold_key key[,duration]: Hold a key down for a specific duration (default 0.5 seconds)
3. Observation: You will receive the result of your action

You will interact with the game via the keyboard and mouse actions. To help you with mouse actions, we provide a thin red grid overlay that intersects the screen at 100x100 pixel intervals (labelled with coordinates divided by 100). I also added 4 blue dots 25 pixels away in each direction with their exact coordinates in case you get lost. The coordinates start at (0,0) at the top left of the screen, indexed (x,y) and go up to (640,400) at the bottom right. For example, if you want to click somewhere inside a box with top left corner at (100,100) and bottom right corner at (150,150), you can move to (125,125) then click (estimate based on the picture! Try to get it as close as possible, don't rely on multiples of 10).

For keyboard actions, use the following format:
- Single keys: "KeyA", "KeyB", "Digit1", "ArrowLeft", "ArrowUp", "Enter", "Escape", "Backspace", "Tab", "Space"
- Special keys: "Control", "Alt", "Shift", "Meta"
- Key combinations (use + symbol): "Control+KeyC", "Shift+ArrowUp", "Alt+F4"
- Sets of key combinations (multiple keys pressed at the same time): "KeyA,Shift+KeyB"

Respond in the following JSON format:
{
  "thought": "your reasoning about what to do next",
  "action": "one of the available actions",
  "action_input": "parameters for the action",
  "memory": "important information to remember for future steps"
}

To not update memory, respond with an empty string. For example:
{
  "thought": "I need to left click on the search box",
  "action": "click",
  "action_input": "",
  "memory": "1. My short term plan is to capture the enemy flag.\n 2. My opponent is trying to block my path, I should be wary.\n 3. Farms make my units stronger. 4. The M button is to move units."
}

Another example of right clicking:
{
  "thought": "I need to right click on the search box",
  "action": "click",
  "action_input": "right",
  "memory": ""
}

Or for keyboard actions:
{
  "thought": "I need to move the character left in the game",
  "action": "press_key",
  "action_input": "ArrowLeft",
  "memory": "The character moves faster when holding the arrow key down instead of tapping it."
}

Do NOT wrap anything in ```json``` tags, and only respond with the JSON object. Always analyze the screenshot carefully to determine the correct coordinates for your actions. The memory field should contain any important information you want to remember for future steps.

We also separately prompt the Game Boy emulator games for the Lite version of the benchmark. Although we could have done the same for the MS-DOS games, we chose not to.

Instructions for Game Boy Games

You are an AI agent playing a Game Boy game. You will receive game screens as images and must decide what buttons to press. Feel free to skip the start screen and play a new game. Your goal is to play the game effectively by analyzing the visual information and making appropriate decisions. You should respond with a list of (or single) actions
to perform in sequence (each is performed for roughly 1/4 second) and wrapped in ```actions``` tags. You can repeat actions to maintain pressure on the buttons. To press multiple buttons simultaneously, group them in a tuple.

Example response format:
```actions
[
  "A",                # Press A button
  ("B", "UP"),        # Press B and UP simultaneously
  "RIGHT",            # Press RIGHT
  "START",            # Press START
  ("A", "B", "DOWN")  # Press A, B, and DOWN simultaneously
]
```

Instructions for Game Boy Games on VideoGameBench Lite

You are an AI agent playing a Game Boy game. You will receive game screens as images and must decide what buttons to press. Feel free to skip the start screen and play a new game. Your goal is to play the game effectively by analyzing the visual information and making appropriate decisions. You should respond with a list with a single (or tuple of) buttons to press for the Game Boy emulator (each is performed for roughly 1/2 second or 30 frames) and wrapped in ```actions``` tags. Please do not add comments to the response.

Example response format (press A twice):
```actions
[
  ("A"),
  ("A"),
]
```

Another example of pressing multiple buttons simultaneously:
```actions
[
  ("A", "B", "DOWN")
]
```

Never press START and SELECT simultaneously, as this will restart the emulator.

Available buttons: A, B, START, SELECT, UP, DOWN, LEFT, RIGHT

B.2 Game-specific Prompts

We include the game prompts provided to VG-Agent while playing each game on VideoGameBench (specifically the test split used in § 4).

Instructions for "Civilization I"

You are playing Civilization on DOS. Your goal is to build a city, research technology, and complete this campaign. You will be playing as the Romans in Chieftain mode (the easiest mode) and there will be 7 civilizations. You can click and drag the mouse to select objects, characters, buildings, etc. and also use your keyboard keys to move.
Game Objectives

Win Conditions: Achieve global dominance by conquering all other civilizations or be the first to send a spaceship to Alpha Centauri.
Core Goals: Expand your civilization by building cities, managing resources, advancing technology, engaging in diplomacy, and waging war if necessary.

If you get stuck on a screen, try clicking or pressing enter to see if the screen changes! If you're trying to move your units and they are not moving, it means you are trying to move over invalid terrain -- try another direction or action!

Make sure to remember all of the following facts:
1. Ground troops cannot walk through water (the blue regions), mountains, or other obstacles.
2. End your turn when you're finished with what you want to do.
3. Each unit moves 1 tile. So if you want to move another unit, move the selected unit first.
4. In the beginning, a good strategy is to just explore and have your units move around and explore unseen areas.

General Controls

Mouse: Click to select units, cities, and menu options. Right-click may provide additional info.
Keyboard Shortcuts:
Movement: Arrow keys (or Numpad) to move selected unit.
End Turn: Enter key.
Access City Menu:
Click on a city or press C.
Change Tax/Science/Luxury Rates: F1 (Tax Advisor).
Access Civilopedia: F10.
Save Game: Shift + S.
Load Game: Shift + L.
View Military Advisor: F3.
View Diplomacy Screen: F4.
View World Map: F5.

City Management
Found City: Move settler unit to an empty tile and press B.
Manage Production: Click on a city, then choose what to build (units, buildings, wonders).
Adjust Citizen Tasks: Click on tiles within city radius to assign workers.

Unit Management
Move Units: Use arrow keys or Numpad.
Fortify Unit: Press F to make the unit stay in place and defend.
Skip Turn: Spacebar.
Disband Unit: Press D.
Activate Next Unit: N key.

Combat & Diplomacy
Attack Enemy: Move military unit onto an enemy unit or city.
Negotiate Diplomacy: Open diplomacy screen (F4) and select a civilization to negotiate with.
Declare War: Refuse enemy demands or attack their units/cities.

Research & Progression
Select New Technology: Open the Science Advisor menu (F6) and choose a research path.
Advance Eras: Progress by researching key technologies.
Build Wonders: Unique, powerful structures that provide long-term benefits.

Unit Management
Move: Arrow keys or Numpad (1-9, except 5) for movement.
Fortify (F): Increases defense by 50%, unit remains stationary.
Sentry (S): Unit does not require orders each turn but activates when an enemy approaches.
Skip Turn (W): Temporarily skip a unit and return to it later.
No Movement (Spacebar): Ends the unit's turn without action.
Activate Fortified/Sentry Unit: Click the unit or select from the city menu.
Unload from Ship (U): Unloads units from a transport ship.
Go To Command (G): Orders the unit to move to a specified location.
Change Home City (H): Assigns a new home city, used for air unit refueling.
Disband (Shift + D): Removes the unit permanently.
Pillage (Shift + P): Destroys improvements on a tile.

City Management
Found a City (B): Settler establishes a new city.
Grow a City (B in existing city): Settler increases city population.
Manage Production: Select what to build (units, buildings, wonders).
Adjust Citizen Tasks: Assign workers to different tiles.
Set Tax/Science/Luxury Rates (F1): Adjust economy.

Instructions for "The Need for Speed"

You are playing Need for Speed. You are sitting in your car at the starting line, ready to compete. Your goal is to win the race on each track. You need to shift gears (up) when starting the race to move forward. Move around the menu screen using the arrow keys (up and down), press Enter to start.

Controls:
- Steering: Arrow keys (← →) or move mouse left/right
- Accelerate/Brake: Up/Down arrows (↑ ↓) or Left-click/Right-click mouse
- Shift Gears: A (shift up) / Z (shift down)
- Hand Brake: Spacebar
- Horn: H
- Camera Views: C
- Pause: ESC or P

Additional Function Keys:
- F1: Toggle window size
- F3: Toggle view distance
- F5: Mute sound
- F7: Toggle status bar
- F9: Toggle dashboard

To go forward after starting the race, you need to first shift gears up (e.g. press A), then hold up arrow, e.g. hold_key ArrowUp
5000 (or some other large number). Be careful not to select too high of a number so you can react and turn (you need to move forward while turning to drift). Get ready to race! Use your skills to outmaneuver opponents and claim victory on every track.

Instructions for "The Incredible Machine"

You are playing The Incredible Machine, a puzzle game where you must use various mechanical objects and devices to solve Rube Goldberg-style challenges. Your goal is to place and arrange the provided objects to complete each puzzle's objective.

Game Controls:
Mouse Controls:
Left Click: Select and place objects
Right Click: Remove placed objects
Click and Drag: Move objects around the puzzle area
Keyboard Controls:
Space: Start/Stop the machine
R: Reset the puzzle
ESC: Access menu
F1: Help screen

Game Mechanics:
- Each puzzle provides specific objects you can use
- Objects must be placed in the correct positions to create chain reactions
- Physics affects how objects interact (gravity, momentum, bouncing)
- Some objects need to be precisely positioned to work properly

Common Objects:
- Ropes and pulleys: Transfer motion
- Conveyor belts: Move objects horizontally
- Springs: Bounce objects
- Motors: Provide continuous rotation
- Gears: Transfer rotational motion
- Balls and bowling balls: Roll and bounce
- Cats and mice: Animals that react to each other
- Balloons: Float upward
- Electrical switches: Trigger connected devices

Problem-Solving Tips:
1. Examine the puzzle goal carefully
2. Study the available objects
3. Consider how objects will interact
4. Test your solution in parts
5. Make small adjustments for timing
6. Watch for unintended interactions
7.
Use gravity to your advantage

Remember:
- There are often multiple solutions to each puzzle
- Timing is crucial for many puzzles
- Some objects may not be needed
- Pay attention to object orientation
- Chain reactions should flow naturally
- Save working parts while experimenting with others

If stuck:
- Reset and try a different approach
- Watch how objects interact during test runs
- Break down complex solutions into smaller steps
- Consider unconventional uses for objects
- Look for visual hints in the puzzle design

Instructions for "Pokemon Crystal"

You are playing Pokemon Crystal version. Your goal is to navigate the world, catch and train Pokemon, battle gym leaders, and progress through the game. You start as a young trainer in Pallet Town, choosing between Bulbasaur, Charmander, or Squirtle as your starter Pokemon. Your ultimate goal is to defeat the Elite Four and become the Pokemon Champion, while completing your Pokedex by catching all 151 original Pokemon.

Analyze the current game screen and decide what buttons to press. Respond with a sequence of actions to perform. Think step by step:
1. What is happening in the current screen?
2. What action would be most appropriate?
3. What buttons need to be pressed to take that action?

Available buttons: A, B, START, SELECT, UP, DOWN, LEFT, RIGHT

Tips:
1. To get past any menu or typing screen, press START or START, A when you are done. No matter where your
arrow is on the screen, it'll go to the end.
2. When trainers see you, they will want to battle.
3. In a Pokemon battle, you attack your enemies and you lose if your Pokemon all reach 0 HP.
4. When typing a name, just press A twice to exit when your name is full. Don't go right then A.
5. Wild Pokemon appear randomly when walking in tall grass, caves, or while surfing.
6. During battles (using the movement keys to move icons):
- Choose "FIGHT" to use your Pokemon's moves
- Choose "BAG" to use items like Potions or Pokeballs
- Choose "POKEMON" to switch to a different Pokemon
- Choose "RUN" to attempt escaping from wild Pokemon battles
7. Type advantages are crucial: Water beats Fire, Fire beats Grass, Grass beats Water
8. Use Pokemon Centers (buildings with red roofs) to heal your Pokemon for free
9. Buy supplies like Pokeballs and Potions at PokeMarts (buildings with blue roofs)
10. Read dialogue and continue by pressing 'A'.

Each movement key (e.g. UP, DOWN, LEFT, RIGHT) will move your character (with the hat) one tile in that direction. Keep that in mind, and calculate where to go based on what you want to do. You can interact with people (you should to get information and also proceed in the game) using the A button by standing next to them.

Instructions for "Doom II"

You are playing Doom II on DOS. Your goal is to complete the game by defeating all the levels. Explore all rooms as much as possible. The enemies are wearing red and green with dark green heads. If they are not moving, they are probably dead. Do not confuse them with random objects. You are playing the regular difficulty of "Hurt me plenty" -- never modify it!

Do NOT FORGET the following:
1. Look for doors, which will be in the corridors and have some kind of writing on the door (e.g. UAC is a door). You can open them! Try aligning yourself so the door is centered on your screen, then walk up to it. When you're pressed against the door, press space to open it.
2.
Doors usually have blue triangles (they themselves are not doors) near them on the sides, and it will be obvious you can open it.
3. You need to be directly in front of the door and press 'space' to open it, you cannot be far away to open it. Don't just go backwards because you're not sure.
4. If you get stuck on a wall or moving against a wall, try taking a few steps back and re-adjusting your thoughts.
5. If there are a lot of enemies or you are being shot at, try strafing around and moving a lot side to side to avoid getting fired at while also aiming and shooting. Remember exactly what direction you were turning so you don't make redundant movements. Use the repeated key presses to turn. You can also move your character to adjust your aim.

If YOU SHOT AND
IT DID NOTHING, IT PROBABLY MEANS YOU WERE OFF TARGET. Re-adjust.

Keep moving forward until your screen doesn't change. Do not gaslight yourself into thinking you're stuck at a door when you are not. You will know easily when you are stuck.

You should aim and kill any enemy you see, do not just walk past them, as they will shoot you once you pass them. If it does walk past you, remember that it is now behind you in your memory -- do not forget, and turn around and make sure you eliminate them. If your screen flashes red it might mean that you're being shot; turn around if you don't see an enemy in front of you to check if an enemy is shooting you from the back.

Think through your actions and be patient -- do not rush shooting until the enemy is in the center of your screen. When aiming at the enemy, ignore your previous thoughts -- they applied to your old observations. For example, if they were centered in your thoughts before, they might not be centered anymore, and you will need to re-adjust. Look at your most recent observation to make sure.

Just because you see an enemy or enemies in front of you does NOT mean you should shoot. Carefully align your gun so they are exactly at the center. Don't just blindly shoot. Focus on ONE enemy at a time. If you defeat that enemy, re-aim to focus on the second, don't keep blankly shooting, even if your thoughts tell you to. You need to aim. Move with the arrow keys to adjust your aim, more controls below. In general, you can chain multiple keys in sequence (e.g. if the enemy is slightly to the right, ArrowRight,Control)

Basic Controls:
- Repeat `ArrowLeft` or `ArrowRight` 14 times separated by commas to turn left and right 90 degrees if you want to turn a corner.
- Repeat W multiple times separated by commas, e.g. press_key W,W,W,W,W,W if you want to explore forward.
- Control: Fire weapon
- Space: Open door / interact

For finer control when dealing with enemies:
- ArrowLeft, ArrowRight: Look left and right to adjust your aim and look around
- W, A, S, D: Move forward, left, back, right
- Shift + W, A, S, D: Run forward, left, back, right

Instructions for "Kirby's Dream Land"

You are playing Kirby's Dream Land on the Game Boy. You control Kirby, a round hero who can walk, jump, inhale enemies, and fly. Your goal is to progress through levels, defeat enemies, and overcome bosses to eventually save Dream Land.

Analyze the current game screen and decide what buttons to press. Think through each situation step by step:
1. Assess the current screen:
- What enemies or obstacles are present?
- Is Kirby on the ground or in the air?
- Are there any platforms or doorways?
- Is there a boss battle happening?
2. Consider your options:
- Do you need to avoid enemies?
- Should you inhale enemies to use as
projectiles?
- Is flying a better option than walking?
- Are there items or power-ups to collect?
3. Plan your next action and execute using the available controls:

MOVEMENT CONTROLS:
- LEFT/RIGHT on Control Pad: Move Kirby left/right
- UP on Control Pad: Enter doorways or fly upward
- DOWN on Control Pad: Crouch and swallow inhaled enemies

ACTION BUTTONS:
- A Button: Jump
- B Button: Inhale enemies/objects or spit them out as projectiles
- START Button: Pause the game

VITALITY AND CHANCES:
- Kirby has 6 vitality bars that decrease when hit by enemies
- Losing all vitality bars costs one chance (life)
- Game ends when all chances are depleted

SPECIAL NOTES:
- You can float indefinitely by repeatedly pressing the A button
- Inhaled enemies can be used as projectile weapons
- During boss battles, watch the boss's vitality bar above Kirby's
- If stuck, you can reset the game by pressing A, B, START, and SELECT simultaneously

A few things you should ALWAYS remember:
1. For things that say "IN" or black doors / light doors, Kirby has to go into it to go into a room. Don't just hover above it.
2. Shining stars (called warp stars) are the end of the level, and transition you further into the game. Kirby has to go into it or step on it.
3. Do not hit enemies directly, or Kirby will take damage. Spit out enemies (not bosses) or items like bombs back to damage your enemies!

Kirby is a classic platformer, so you generally should continue to the right to progress in the game. Respond with a clear sequence of actions, explaining your reasoning for each decision.

Available buttons: A, B, START, SELECT, UP, DOWN, LEFT, RIGHT

Instructions for "The Legend of Zelda: Link's Awakening (DX)"

You are an expert playing The Legend of Zelda: Link's Awakening on Game Boy. Your goal is to beat the game flawlessly by navigating the world, solving puzzles, defeating enemies, and progressing through dungeons. You cannot walk through walls or doors, so try stepping back or around them!
Controls:
- Arrow keys: Move Link around, move options around in a menu.
- A: Use Item Box A. Also for proceeding with dialogue.
- B: Use Item Box B.
- START: Open inventory.
- SELECT: View map or switch items.

Analyze the current game screen and decide what buttons to press. Respond with a sequence of actions to perform. Think step by step:
1. What is happening in the current screen?
2. Are there enemies, NPCs, or interactive objects?
3. What action would help progress in the game?
4. What buttons need to be pressed to take that action?

You cannot move if dialogue is on the screen until you finish it, so keep pressing A until it is over.

Available buttons: A, B, START, SELECT, UP, DOWN, LEFT, RIGHT

C VideoGameBench Details

Figure 4: VideoGameBench features a set of 20 video games from the MS-DOS and Game Boy platforms that VLMs are tasked with playing to completion.

In this section, we
provide additional information on the VideoGameBench setup. We show all the public video games in the dev and test splits of VideoGameBench in Figure 4. We also provide more information on checkpoint detection.

C.1 List of Available Actions

Here, we list all the actions available to an agent on VideoGameBench for both Game Boy and MS-DOS games. For key or button presses, we explicitly allow the agent to specify how long they hold the keys for, but found this to be confusing because it is difficult to embed time into the agent's context. We use a default time of half a second for key / button presses, but provide the option to change this default setting in the benchmark.

C.1.1 List of Game Boy Actions

Using the PyBoy (Mads Ynddal, 2025) interface, we enable agents on VideoGameBench to press the following buttons:
• A — Press the A button on the Game Boy emulator
• B — Press the B button on the Game Boy emulator
• SELECT — Press the SELECT button on the Game Boy emulator
• START — Press the START button on the Game Boy emulator
• RIGHT — Press the right arrow on the Game Boy emulator D-Pad
• LEFT — Press the left arrow on the Game Boy emulator D-Pad
• UP — Press the up arrow on the Game Boy emulator D-Pad
• DOWN — Press the down arrow on the Game Boy emulator D-Pad

C.1.2 List of Keyboard and Mouse Actions

Using the Playwright (Mads Ynddal, 2025) interface with JS-DOS (Guryanov, 2025), we enable agents to use the mouse and keyboard interface in the following way:
• click — Click the mouse at the current position. You can specify options like right-click or modifier keys (e.g., Shift, Ctrl, Alt).
• move — Move the mouse to a specific 2D pixel coordinate on the screen, e.g., "150,200".
• drag — Drag the mouse to a target coordinate, simulating a click-and-drag motion.
• scroll_down — Scroll down by a specified number of pixels.
• scroll_up — Scroll up by a specified number of pixels.
• write — Type a string of text as keyboard input into the emulator.
• press_key — Press a single key or a sequence of keys (e.g., "Enter", "Ctrl,C").
• hold_key — Press and hold a key for a specified duration in seconds (e.g., "A,1.5" to hold A for 1.5 seconds).

C.2 Verifying that VideoGameBench is capable of being completed

While each game in VideoGameBench has been completed by at least one human, we also provide evidence that our interface does not cause games to be impossible to solve. Using PyBoy and Playwright, we give agents sufficient access to press any button or set of buttons simultaneously that the emulator allows. Our human interface therefore enables the same controls as the emulator. We focus instead on the information given to the agent, and only provide the human with the same information. In other words, the human only sees the most recent frame before taking an action. Using this interface, our co-authors manually beat the first checkpoint of Kirby's
Dream Land and Doom II, both of which require real-time inputs.

C.3 Scraping Checkpoints from Walkthroughs

We provide automatically trackable progress markers on VideoGameBench by scraping checkpoint frames from online walkthroughs. In previous VLM-based playthroughs of video games such as Gemini Plays Pokemon (Joel Z, 2025), progress was manually marked by viewers. To help scale these experiments to several different games, we provide a system of finer, automatic tracking of progress.

Each video game in VideoGameBench has an associated walkthrough available through video streaming platforms such as YouTube. These playthroughs contain full, unedited gameplay of a human playing each video game to completion. Many of these videos contain timestamps with game checkpoints that we used to determine checkpoints, but we also scrape other suitable checkpoints as well. To add a checkpoint to a game, we require an (image-frame, timestamp) pair, which can be found directly from these videos.

C.4 How Checkpoints were Determined

All games in VideoGameBench have a linear progression, which means there is a certain pre-defined path that a player must follow to complete the game. For example, while there are infinitely many possible trajectories in the game Pokemon Red, there are certain unavoidable story checkpoints (e.g. grab each Gym badge, defeat the Elite 4, etc.) that the player must reach. Furthermore, many of these story checkpoints are rendered identically. For example, when a player in Pokemon Red defeats a Gym Leader, the game will always render the same background with the player facing the Gym Leader and the same text, regardless of the trajectory taken by the player up to this point. We use these frames as checkpoint frames, and estimate progression based on when an online walkthrough has reached this frame.
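The progress estimate described above can be sketched as follows. This is an illustrative helper of our own (the function name and signature are assumptions, not the benchmark's implementation): each matched checkpoint carries the walkthrough timestamp it was scraped from, and the furthest matched timestamp, relative to the walkthrough's total length, gives the fraction of the game completed.

```python
# Illustrative sketch (our own helper, not the benchmark's implementation):
# each checkpoint is an (image-frame, timestamp) pair; progress is estimated
# from the walkthrough timestamp of the furthest checkpoint frame matched.

def progress_percent(matched_timestamps: list[int], walkthrough_length: int) -> float:
    """Percentage of the game completed, per the furthest checkpoint reached.

    `matched_timestamps` holds the walkthrough timestamps (in seconds) of
    all checkpoint frames the agent has matched so far.
    """
    if not matched_timestamps:
        return 0.0
    return 100.0 * max(matched_timestamps) / walkthrough_length

# e.g. checkpoints matched at 300 s and 900 s of a 6000 s walkthrough
# correspond to 15% completion.
```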
In our setting, the easiest checkpoints to scrape are those that are time-stamped in online walkthroughs, but there are often more checkpoint frames available that could be used. Adding more checkpoint frames increases the granularity of automatic progress detection, which we plan to expand on in the future.

C.5 Perceptual Image Hashing for Checkpoint Detection

We provide a baseline Hamming distance (Hamming, 1950) threshold of <12 for determining whether two frames match, but enable the option to tune this parameter individually for each checkpoint in each game. For example, for Kirby's Dream Land, we choose between 6−8 for this threshold. One limitation of applying perceptual hashing over full game screen images arises when distinctions between checkpoints are local to a certain patch of the image. For example, in The Incredible Machine, the level completion screen is a small textbox indicating that a level was completed. We plan to implement features for defining a rectangular crop of the screen and comparing hashes over these crops rather than over the full screen. Another limitation that requires tuning is that scraped frames may have slight differences or artifacts relative to the frame in the playthrough. For example, the frame may contain the current player's health bar, as in Kirby's Dream Land, which may differ from the health bar in the walkthrough at that frame.
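The matching test described above can be sketched as an average-style perceptual hash compared under a Hamming-distance threshold. The function names and the tiny 4×4 "frames" (2D lists of grayscale values) below are our own illustrative assumptions, not the benchmark's implementation; real systems typically downscale each frame to e.g. 8×8 grayscale before hashing.

```python
# Sketch of perceptual-hash checkpoint matching with a Hamming threshold.

def average_hash(frame):
    """Bit string with a 1 wherever a pixel is at or above the mean brightness."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p >= mean else "0" for p in pixels)

def hamming(h1, h2):
    """Number of bit positions at which the two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

def frames_match(f1, f2, threshold=12):
    """Declare a checkpoint hit when the hash distance is below the threshold."""
    return hamming(average_hash(f1), average_hash(f2)) < threshold

# Two near-identical frames differing in a single pixel: distance 1, a match.
a = [[0, 0, 255, 255] for _ in range(4)]
b = [[0, 0, 255, 255] for _ in range(3)] + [[0, 0, 255, 0]]
print(frames_match(a, b))  # True
```

Per-checkpoint tuning then corresponds to passing a different `threshold` (e.g. a tighter value in the 6−8 range for games like Kirby's Dream Land).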
Other issues include different aspect ratios and resolutions in scraped walkthroughs, all of which can be addressed with manual tuning. We aim to provide a robust system for automatic checkpoint detection as scores on VideoGameBench continue to improve.

C.6 Benchmark Statistics

We provide a distribution of the length of each game in VideoGameBench, as well as the checkpoints scraped for the test split. In Figure 5, we show the distribution of checkpoints on VideoGameBench.

C.7 Links to Walkthroughs

We provide links in Table 5 to all walkthroughs used for scraping checkpoints, as well as for estimating game lengths. Each of these walkthroughs assumes full knowledge of the game and does not include time spent exploring.

D Experiment Details

In this section, we provide additional information on the experiments in § 4: the criteria we used to determine the end of a run, cost information for the main experiments, and a variance analysis of the best performing model.

Figure 5: VideoGameBench checkpoint lengths. We show the length of each game walkthrough and the position of each checkpoint as a black divider. Checkpoints in VideoGameBench are mapped to the timestamps they were scraped from in an online walkthrough video to determine the percentage of the game that was completed.

Table 5: List of longplay walkthrough links for scraping checkpoints on VideoGameBench.
Test Set Games                          Walkthrough Link
Legend of Zelda DX                      https://www.youtube.com/watch?v=rLFXGs1Rr6c
Doom II                                 https://www.youtube.com/watch?v=nhmRxFf02JA
Kirby's Dream Land DX                   https://www.youtube.com/watch?v=n8CSolb0hjc
Civilization I                          https://www.youtube.com/watch?v=o6_5PIsJkYk
Need for Speed                          https://www.youtube.com/watch?v=1Sf7_TbG8Js
The Incredible Machine                  https://www.youtube.com/watch?v=pTbSMKGQ_rU
Pokémon Crystal                         https://www.youtube.com/watch?v=HQEaaIuyKAM

Dev Set Games                           Walkthrough Link
Doom                                    https://www.youtube.com/watch?v=Q4GiCg_m8wA
Quake                                   https://www.youtube.com/watch?v=Y8k9c-6Me_A
Civilization                            https://www.youtube.com/watch?v=o6_5PIsJkYk
Warcraft II                             https://www.youtube.com/watch?v=DWjZQzviTUs
The Oregon Trail Deluxe                 https://www.youtube.com/watch?v=FaWmldKoo9Y
X-COM UFO Defense                       https://shorturl.at/BPIEb
Scooby Doo: Classic Creep Capers        https://www.youtube.com/watch?v=yYfpaz9NwU8
Prince of Persia                        https://www.youtube.com/watch?v=qMOBiT3F6AM
Age of Empires                          https://www.youtube.com/watch?v=hqsbQzpFRBI
Pokémon Red                             https://www.youtube.com/watch?v=hYcotDHI0dg
Super Mario Land                        https://www.youtube.com/watch?v=vfaV2VDBQEk
Castlevania: The Adventure              https://www.youtube.com/watch?v=oL5TwYG0iQ8
Donkey Kong Land 2                      https://www.youtube.com/watch?v=tqUU9pwUxQo
Mega Man: Dr. Wily's Revenge            https://www.youtube.com/watch?v=wpwESPNjAk0

D.1 Criteria for Ending Runs

Due to budget constraints, the experiments in § 4 were ended when the agent was stuck or not progressing for a certain period of time. In this section, we detail exactly the criteria we used per game to determine whether a run should be prematurely ended. We ended a run if:

1. The agent quits the emulator or puts the game in a "locked" state. In DOS or GBA games, if the agent quits the entire game (which requires multiple steps), it is unable to restart the game.
2. The game provides the agent multiple lives, and the agent reaches a "Game Over" screen by losing them all (e.g. Kirby's Dream Land and Super Mario Land).
Otherwise, if the agent loses in the same location more than three times, we also end the run.
3. The agent is "stuck" (i.e., on the exact same screen) for more than 100 steps. We reason that since the context window is only 20 steps, the probability of getting unstuck conditioned on this context is low.
4. The agent loses without damaging any enemies, indicating that little progress can be made in multiple repeated trials.
5. The agent uses $30 (∼2000 steps) without reaching a new checkpoint. This
was mainly to avoid long loops, as in Doom II, where the agent repeatedly revisits the same locations.

D.2 Main VideoGameBench Experiments Cost

We report the cost per experiment of Table 2. Discrepancies in costs do not necessarily reflect differences in model costs; rather, some runs were ended early according to the criteria in Appendix D.1. Note that because these criteria were not automatically checked, some runs may have run for slightly longer and therefore incurred a slightly higher cost. We observed that in practice this extra time did not influence or change any of the results.

Table 6: Cost of each run on the VideoGameBench test split from Table 2 in USD ($). Note that some runs were ended early when the agent was stuck or making no meaningful progress. Certain models are also significantly cheaper per token than others.

VideoGameBench            GPT-4o    Sonnet 3.7    Gemini 2.5 Pro    LLaMA 4    Gemini 2.0 Flash
Civilization I            $14.46    $30.00        $5.29             $0.42      $10.05
The Need for Speed        $0.50     $0.62         $0.38             $0.04      $0.08
The Incredible Machine    $1.36     $5.26         $1.31             $0.11      $0.18
Pokemon Crystal           $15.04    $29.64        $4.78             $0.14      $3.88
Doom II                   $0.17     $0.40         $3.79             $0.03      $0.01
Kirby's Dream Land        $8.91     $2.98         $3.89             $0.09      $0.16
Link's Awakening (DX)     $7.86     $20.00        $18.51            $0.09      $0.18

D.3 Variance Analysis for Main Experiments

Using the best performing agent on VideoGameBench (Gemini 2.5 Pro), we re-run experiments five times for Kirby's Dream Land, which is the only game where the agent reaches a checkpoint (it completes 4.8% of the game). We also re-run experiments five times for Doom II and The Incredible Machine, where the agent made no progress. Across the five runs, the sample variance on Doom II and The Incredible Machine is 0, while for Kirby's Dream Land it is 0.2.

Table 7: We take the best performing model on VideoGameBench, Gemini 2.5 Pro, and re-run it five times on Kirby's Dream Land, which is the only game where it reached at least one checkpoint.
We also run it five times on two games, Doom II and The Incredible Machine, where it made no progress.

Gemini 2.5 Pro on VideoGameBench    Run 1    Run 2    Run 3    Run 4    Run 5
Kirby's Dream Land                  1        0        1        1        1
The Incredible Machine              0        0        0        0        0
Doom II                             0        0        0        0        0

E Qualitative Analysis

We provide specific examples of trajectories taken when generating Table 2 and Table 3. These examples provide qualitative evidence to support claims made in § 4.2.

E.1 The Knowing-Doing Gap

In the example below, we show how a frontier VLM (Llama 4 Maverick) writes out what it should be doing (e.g. moving down towards the door to exit the room and explore outside) but repeatedly presses "down" without actually aligning the character to face the door. In the rest of the trajectory, the agent continually repeats this action and remains stuck on the same screen.

Llama 4 Maverick playing The Legend of Zelda: Link's Awakening (VideoGameBench).

Step 130
Thought (after image): The current screen shows Link standing near the exit of the house. To
progress, I need to exit the house by moving down.
Action (after image): DOWN

E.2 Frontier models struggle to correctly process visual inputs

The following example shows the model struggling to understand the screen. After clearing the starting room on the first level of Doom II, the Gemini 2.0 Flash VG-Agent continues firing at the enemies, and outputs that it thinks "the enemy is still alive. I need to adjust aim again and fire." In this trajectory, the agent eventually wastes all of its ammo and gets stuck in the first room.

Gemini 2.0 Flash on Doom II (VideoGameBench Lite).

Step 305
Thought (after image): I have adjusted aim slightly to the left and fired, but the enemy is still alive. I need to adjust aim again and fire.
Reflection (after image): Goal: Complete the game by defeating all the levels, exploring all rooms... Now I need to re-aim to the other enemy and fire again. The enemy is still slightly to the left, so I will adjust my aim and fire.
- I have adjusted aim slightly to the left and fired, but the enemy is still alive. I need to adjust aim again and fire.
Action (after image): press_key: ArrowLeft, Control (fire gun)

In the next example, GPT-4o gets confused trying to interact with an NPC in The Legend of Zelda. In this room, the agent must first talk to the man on the right before it can leave the room. After exploring the starter room, the agent realizes it needs to talk to the NPC to grab a shield item. It sees that it is near the NPC and continues to believe that it is interacting with the NPC despite no dialogue appearing. It continues this behavior until later attempting to exit, falsely believing it has spoken to the NPC.

GPT-4o on The Legend of Zelda: Link's Awakening (VideoGameBench).

Step 309
Thought (after image): **Current Situation and Strategy:**
- Link is still beside the NPC. No visual confirmation of the shield yet.
- Primary goal remains to acquire the shield from the NPC.
- Continuing to engage with the NPC is necessary.
**Next Steps:**
1. **Press "A" repeatedly** to complete the dialogue.
2. **Ensure the shield is obtained and visible** in Link's inventory.
3. Once the shield is confirmed, plan to exit the house.
Reflection (after image):
- Must confirm shield acquisition before leaving the house.
- Pressing "A" multiple times ensures the dialogue is complete.
- Next objective is to explore outside once the shield is obtained.
- Focus on completing the dialogue and shield acquisition.
- Ensure shield acquisition is confirmed visually before proceeding.
- Once confirmed, plan the path to exit the house.
- Current priority is obtaining the shield from the NPC.
Action (after image):
"A", # Initiate or continue dialogue.
"A", # Ensure dialogue completion.
"A"  # Final confirmation for shield.

E.3 VG-Agent lacks planning and memory management abilities

We first expand on the example provided in the main paper, and also provide a visually obvious
example below. In the Doom II example on VideoGameBench Lite, we found that none of the agents explicitly tracked or mapped out their surroundings. Agents that survived all the enemies often repeatedly walked back and forth between the starting area and the first corridor, never making it to the ending area. For context, the first checkpoint / level of Doom II only requires taking two turns down a hallway and was completed by a co-author with no prior game experience in under 20 seconds.

We show another qualitative example below of GPT-4o on Pokemon Crystal forgetting that it had collected its first Pokemon after the first checkpoint, as well as forgetting that its objective was to run an errand. It overwrites its textual scratchpad memory (the reflection) with a local objective, forgetting what it was trying to do.

GPT-4o on Pokemon Crystal (VideoGameBench).

Step 698
Thought (before image):
1. **Current Situation**: Still trying to exit the lab.
2. **Objective**: Leave the lab to head towards Mr. Pokémon's house.
3. **Analysis**: Ensure proper alignment with the exit carpet and attempt to move down.
Reflection (before image):
- Received Cyndaquil as my starter Pokémon.
- Skipped the nickname option to proceed quickly.
- Objective: Visit Mr. Pokémon as directed by PROF. ELM.
- Need to exit the lab and start the journey towards Mr. Pokémon's location.
- Route to Mr. Pokémon goes through Cherrygrove City.
- Learned to heal Pokémon with the lab machine if needed.
- Currently finishing dialogue with the lab assistant.
- Received a Potion from the assistant, which is now in the ITEM POCKET.
- Currently engaged in a dialogue about the lab staff being busy.
- Attempting to exit the lab to begin the journey.
Thought (after image):
**Step-by-Step Plan:**
1. **Current Situation**: Stuck at the lab exit.
2. **Objective**: Exit the lab and start the journey to Mr. Pokémon's house.
3. **Analysis**: Confirm alignment with the exit and check potential obstacles.
Reflection (after image):
- Need to ensure proper alignment and navigation to exit buildings.
- Current focus is on successfully leaving the lab to progress the storyline.
- Being stuck at the exit requires paying attention to visual cues for proper movement.
Action (after image): UP, DOWN, DOWN

F Practice Games

In this section, we expand on our three practice games defined in § 4 and Table 4. We also verify that each game is solvable by humans playing through the VideoGameBench interface.

F.1 Location Clicking Game

Figure 6: An example screen of the Location Clicking Game. A VG-Agent using a VLM is tasked with clicking 10 green circles, one at a time, in under 250 actions.

The most basic action in any DOS game is clicking a position on the screen. The Location Clicking Game is a simple task in which an agent must click a green circle with a radius of 40px that is randomly generated inside a 640px by 400px region in the browser (this replicates the DOS game setting on VideoGameBench, which runs inside a 640px by 400px window). Each time the agent clicks the