diff --git "a/2511/2511.00839.md" "b/2511/2511.00839.md" new file mode 100644--- /dev/null +++ "b/2511/2511.00839.md" @@ -0,0 +1,1060 @@ +Title: CodeClash: Benchmarking Goal-Oriented Software Engineering + +URL Source: https://arxiv.org/html/2511.00839 + +Published Time: Tue, 04 Nov 2025 01:51:57 GMT + +Markdown Content: +John Yang*1, Kilian Lieret*2, Joyce Yang 3, Carlos E. Jimenez 2, + + Ofir Press 2, Ludwig Schmidt 1, Diyi Yang 1 + +1 Stanford University 2 Princeton University 3 Cornell University + +###### Abstract + +Current benchmarks for coding evaluate language models (LMs) on concrete, well-specified tasks such as fixing specific bugs or writing targeted tests. However, human programmers do not spend all day incessantly addressing isolated tasks. Instead, real-world software development is grounded in the pursuit of high-level goals, like improving user retention or reducing costs. Evaluating whether LMs can also iteratively develop code to better accomplish open-ended objectives without any explicit guidance remains an open challenge. To address this, we introduce CodeClash, a benchmark where LMs compete in multi-round tournaments to build the best codebase for achieving a competitive objective. Each round proceeds in two phases: agents edit their code, then their codebases compete head-to-head in a code arena that determines winners based on objectives like score maximization, resource acquisition, or survival. Whether it’s writing notes, scrutinizing documentation, analyzing competition logs, or creating test suites, models must decide for themselves how to improve their codebases both absolutely and against their opponents. We run 1680 1680 tournaments (25 25,200 200 rounds total) to evaluate 8 8 LMs across 6 6 arenas. Our results reveal that while models exhibit diverse development styles, they share fundamental limitations in strategic reasoning. Models also struggle with long-term codebase maintenance, as repositories become progressively messy and redundant. These limitations are stark: top models lose every round against expert human programmers. We open-source CodeClash to advance the study of autonomous, goal-oriented code development. + +††∗Equal contribution. Correspondence to [johnby@stanford.edu](mailto:johnby@stanford.edu),[kl5675@princeton.edu](mailto:kl5675@princeton.edu). + +1 Introduction +-------------- + +Existing coding benchmarks challenge language models (LMs) to complete small, focused tasks, such as implementing an algorithm(Jain et al., [2024](https://arxiv.org/html/2511.00839v1#bib.bib27)), fixing a specific bug in a single function(Jimenez et al., [2024](https://arxiv.org/html/2511.00839v1#bib.bib29)), or writing a test for a target class(Mündler et al., [2024](https://arxiv.org/html/2511.00839v1#bib.bib36)). Problem statements are straightforward and fine-grained in their description of a task. Given explicit instructions, models are evaluated on their ability to execute them correctly. + +On the contrary, real world software development demands a much broader scope of agency. Instead of maintenance tasks, developers are driven by high-level goals like improving user retention, increasing revenue, or reducing costs. This requires fundamentally different capabilities; engineers must recursively decompose these objectives into actionable steps, prioritize them, and make strategic decisions about which solutions to pursue. 
The process is a continuous loop – propose changes, deploy them, analyze real-world feedback (e.g., metrics, user behavior, A/B test results), and repeat to inform the next move. Evaluating how models fare under such conditions remains an unaddressed challenge in benchmarking.

Therefore, we introduce CodeClash, a benchmark for goal-oriented software engineering. Specifically, multiple LM systems compete to build the best codebase for achieving a high-level objective over the course of a multi-round tournament. These codebases implement solutions that compete in a code arena, such as BattleSnake (grid-based survival), Poker (no-limit Texas Hold’em), and RoboCode (tank combat). Crucially, LMs do not play directly, unlike existing game-based benchmarks (Silver et al., [2016](https://arxiv.org/html/2511.00839v1#bib.bib51); OpenAI et al., [2019](https://arxiv.org/html/2511.00839v1#bib.bib39); Zhang et al., [2025a](https://arxiv.org/html/2511.00839v1#bib.bib68)). Instead, they iteratively refine code that competes as their proxy.

As shown in Figure [1](https://arxiv.org/html/2511.00839v1#S1.F1 "Figure 1 ‣ 1 Introduction ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering"), each round proceeds in two phases: agents edit their code, then their codebases compete head-to-head in a code arena, which executes the implementations against one another and determines winners based on objectives like score maximization, resource acquisition, or survival.

Success in CodeClash requires models to determine their own improvement strategies. From the outset, LM agents receive only a brief description of the setting. While information like arena mechanics, example bots, and recommended strategies is available in the starter codebase, models must take the initiative to discover it. Each round, LMs receive gigabytes of logs from past rounds, which they can parse to extract insights about outcomes and opponents – or ignore entirely. Across the span of a tournament, CodeClash reveals whether and how models populate their codebases with notes, tests, and analyses.

We evaluate 8 frontier LMs across 6 arenas. We find CodeClash elicits substantial creativity from models; across 1,680 tournaments, we observe that a model’s solutions become increasingly dissimilar round over round, even when facing the same opponent in the same arena. However, our results reveal that while models exhibit diverse development styles, they share common limitations in interpreting competitive feedback, validating changes, and maintaining organized codebases over time. Even top models hallucinate reasons for failure or modify code without confirming whether those changes meaningfully improve performance. A substantial gap remains between model and human performance; the best model (Claude Sonnet 4.5) fails to win a single round against an expert human-written bot.

We release CodeClash as an open-source toolkit, including the code, arena logs, and a leaderboard, to further the study of self-evolving, LM-based SWE-agents.

![Image 1: Refer to caption](https://arxiv.org/html/2511.00839v1/x1.png)

Figure 1: CodeClash is a benchmark where players (LMs as SWE-agents) compete in programming tournaments spanning multiple rounds. Per round, models edit their codebases (edit phase) before the codebases face off in a code arena (competition phase). Then, the competition logs are copied back into the codebases and the next round begins.
2 CodeClash
-----------

### 2.1 Formulation

CodeClash formalizes competitive coding as a tournament, where two or more players compete in a code arena for multiple rounds. A player is an LM equipped with an agent computer interface (ACI) or scaffold that enables it to interact with a codebase (Yang et al., [2024a](https://arxiv.org/html/2511.00839v1#bib.bib61)). Each player maintains their own codebase for the entire tournament. A code arena is any competition platform that takes in multiple codebases and executes them against one another, producing measurable outcomes about relative performance on a designated objective (e.g., eliminating opponents, acquiring resources, maximizing profit).

Each round proceeds in two phases. In the edit phase, each player independently modifies their codebase using whatever strategies they deem appropriate within a fixed budget of turns. During the competition phase, all codebases are compiled and executed within the code arena, where they interact and compete directly against each other. The arena determines a winner (or declares a tie) based on the codebases’ performance.

CodeClash’s formulation makes several key design decisions. Codebase-as-memory: players have no explicit memory of actions from previous rounds. Their information is limited to whatever they choose to record in the codebase. Log-based feedback: after each competition phase, the results and logs are copied into each player’s codebase as the sole source of new information. Strategic opacity: players cannot see each other’s codebases, though we explore lifting this restriction in Section [4.1](https://arxiv.org/html/2511.00839v1#S4.SS1 "4.1 Ablations ‣ 4 Results ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering").

### 2.2 Technical Details

To implement a player, we use mini-SWE-agent, an ACI that enables an LM to interact with a codebase by issuing bash actions to a terminal (Yang et al., [2024a](https://arxiv.org/html/2511.00839v1#bib.bib61)). Each turn, the LM generates a ReAct-style response (Yao et al., [2023](https://arxiv.org/html/2511.00839v1#bib.bib65)) containing a thought (in natural language) and a bash action, then receives standard output from the terminal environment in return. Next, we define a lightweight, flexible interface for a code arena: an implementation only needs to define commands to run the competition and determine a winner. This minimal overhead enables us to fold many existing competitive programming games and tasks into CodeClash. Further technical discussion appears in §[A](https://arxiv.org/html/2511.00839v1#A1 "Appendix A Infrastructure ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering").
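To make this concrete, the following is a minimal Python sketch of what such an arena interface might look like; the names (`CodeArena`, `RoundResult`, `run_command`, `determine_winner`) are illustrative choices of ours, not the toolkit’s actual API.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class RoundResult:
    """Outcome of one competition phase (illustrative structure)."""
    winner: str | None        # winning player's name, or None for a draw
    scores: dict[str, float]  # per-player scores reported by the arena
    log_paths: list[str]      # logs copied back into each player's codebase


class CodeArena(ABC):
    """Minimal contract an arena implementation fulfills in this sketch."""

    @abstractmethod
    def run_command(self, codebase_dirs: dict[str, str]) -> str:
        """Return the shell command that executes all codebases together."""

    @abstractmethod
    def determine_winner(self, raw_output: str) -> RoundResult:
        """Parse the arena's output and decide the round's outcome."""
```

Under a contract like this, any game that can launch the submitted codebases and report a winner from its execution output can, in principle, be wrapped as a CodeClash arena.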
### 2.3 Features

CodeClash’s initial release features a suite of 6 code arenas, as listed in Figure [1](https://arxiv.org/html/2511.00839v1#S1.F1 "Figure 1 ‣ 1 Introduction ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering"). Each arena is covered thoroughly in §[B](https://arxiv.org/html/2511.00839v1#A2 "Appendix B Arenas ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering"). CodeClash introduces several distinctive properties that collectively push models beyond traditional code completion and issue resolution.

Open-ended objectives. CodeClash departs from the traditional reliance on unit tests or implementation correctness to measure success. Instead, players code to win competitive outcomes that vary dramatically across arenas, from maximizing profit to surviving the longest. This mirrors the ultimate objectives of real-world software more faithfully, where code is written to achieve tangible, practical outcomes (e.g., maximize resources, generate revenue, outperform competitors) rather than simply achieving technical correctness. A consequence of rich objectives is that models must decompose a higher-order goal into actionable subtasks and measurable intermediate metrics that inform code improvements.

Diverse arenas. CodeClash’s arenas vary significantly, with drastic differences in a codebase’s structure, how a codebase interfaces with the arena engine, and the types of logs and feedback generated. This contrasts sharply with existing benchmarks, where evaluation follows a consistent pattern of problem statement, code implementation, and test validation.

Adversarial adaptation. CodeClash’s uniquely multi-player, head-to-head setting adds a new layer of complexity to coding evaluations. While decent LMs may be capable of writing competent implementations, top-performing players will analyze opponent behaviors and incorporate countermeasures, all the while keeping their own play difficult to read. Early-round wins do not ensure continued dominance. At some point, the challenge shifts from writing good code to writing code that consistently beats intelligent competition.

Self-crafted memory. As mentioned in Section [2.1](https://arxiv.org/html/2511.00839v1#S2.SS1 "2.1 Formulation ‣ 2 CodeClash ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering"), CodeClash does not maintain persistent memory for models across rounds; only ephemeral, within-round memory exists. To retain information for future use, models must explicitly add insights to the codebase; how to represent such knowledge is left entirely to the model’s discretion.

Self-directed improvement. Beyond a brief description of the environment and arena, the initial system prompt provided to each player at the start of every edit phase offers nothing more than high-level suggestions about how to enhance its codebase. All decisions and changes LMs make are necessarily autonomous. In practice, this may manifest as models writing analysis scripts to understand competition logs, maintaining notes about past rounds or opponents, or generating multiple candidates to test against one another.
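As an illustration of the first behavior, a player might commit a small script like the following to distill raw competition logs into a per-round win rate; the log convention assumed here (one `winner=<name>` line per simulation) is purely hypothetical, since each arena produces its own log format.

```python
import re
import sys
from collections import Counter
from pathlib import Path

# Hypothetical log convention: each simulation appends a line "winner=<player>".
WINNER_RE = re.compile(r"winner=(\w+)")


def win_rate(log_dir: str, me: str) -> float:
    """Fraction of simulations under log_dir that this player won."""
    outcomes = Counter()
    for log_file in Path(log_dir).glob("*.log"):
        outcomes.update(WINNER_RE.findall(log_file.read_text()))
    total = sum(outcomes.values())
    return outcomes[me] / total if total else 0.0


if __name__ == "__main__":
    # e.g., python analyze_logs.py logs/round_3 player_a
    print(f"win rate: {win_rate(sys.argv[1], sys.argv[2]):.3f}")
```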
3 Experiments
-------------

Models. We select 8 strong LMs to evaluate, where strength is roughly estimated from performance on existing coding benchmarks. Our final list includes two models from the Anthropic family (Claude Sonnet 4.5 (Anthropic, [2025a](https://arxiv.org/html/2511.00839v1#bib.bib2)) and Claude Sonnet 4 (Anthropic, [2025b](https://arxiv.org/html/2511.00839v1#bib.bib3))), three models from the OpenAI family (GPT-5 and GPT-5-mini (OpenAI, [2025a](https://arxiv.org/html/2511.00839v1#bib.bib37)), and o3 (OpenAI, [2025b](https://arxiv.org/html/2511.00839v1#bib.bib38))), Gemini 2.5 Pro (Comanici et al., [2025](https://arxiv.org/html/2511.00839v1#bib.bib15)), Qwen3-Coder (Qwen, [2025](https://arxiv.org/html/2511.00839v1#bib.bib46)), and Grok Code Fast 1 (x.ai, [2025](https://arxiv.org/html/2511.00839v1#bib.bib57)).

Agent system. As discussed in Section [2.2](https://arxiv.org/html/2511.00839v1#S2.SS2 "2.2 Technical Details ‣ 2 CodeClash ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering"), we use mini-SWE-agent. We intentionally decide against using tool-heavy scaffolds such as SWE-agent or OpenHands (Wang et al., [2025b](https://arxiv.org/html/2511.00839v1#bib.bib55)), as they are often optimized for particular models and benchmarks. By restricting interactions to bash commands, mini-SWE-agent avoids imposing predefined assumptions via tools about how LMs should approach codebase modifications or competitive play (Yang et al., [2024b](https://arxiv.org/html/2511.00839v1#bib.bib62)). Per round, models are allotted a maximum of 30 turns for the edit phase, with automatic termination if exceeded. Player configurations are discussed thoroughly in §[C.1](https://arxiv.org/html/2511.00839v1#A3.SS1 "C.1 mini-SWE-agent Configuration ‣ Appendix C Evaluation ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering").

Number of rounds run. For our main leaderboard, models compete one-on-one. Given 8 models and 6 arenas, we run 10 tournaments per model pair per arena, with each tournament lasting 15 rounds. This yields $\binom{8}{2}\times 6\times 10\times 15=25{,}200$ total rounds. Tournament runtime varies by arena, taking 75 minutes on average – totaling 2.4 million hours of runtime (mostly due to model latency), parallelized over the independent tournaments. Tournament configuration details are covered in §[C.2](https://arxiv.org/html/2511.00839v1#A3.SS2 "C.2 Tournament Configuration ‣ 2nd item ‣ C.1 mini-SWE-agent Configuration ‣ Appendix C Evaluation ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering").

Win rates. Performance per model is generally calculated by aggregating tournament wins (each tournament a set of 15 rounds) across all arenas. A single round is won by a model if it achieves a higher score in the arena than its opponent or if its opponent makes an invalid submission. A tournament is won by the model that wins more rounds than its opponent or, if both models win equally many rounds, by the model that scores the last win. (Draws are a possible outcome for each round, so both models might achieve an equal number of wins in a tournament. In the very rare event of a tournament consisting only of draw rounds, the tournament is considered a draw.) The win rate of a model is the fraction of tournaments it has won. For details, see §[C.3.1](https://arxiv.org/html/2511.00839v1#A3.SS3.SSS1 "C.3.1 Definitions ‣ C.3 Evaluation Metrics ‣ C.2 Tournament Configuration ‣ 2nd item ‣ C.1 mini-SWE-agent Configuration ‣ Appendix C Evaluation ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering").

Elo metrics. Inspired by prior work ranking LMs on instruction following (Elo, [1967](https://arxiv.org/html/2511.00839v1#bib.bib18); Bai et al., [2022](https://arxiv.org/html/2511.00839v1#bib.bib5); Boubdir et al., [2024](https://arxiv.org/html/2511.00839v1#bib.bib8); Chiang et al., [2024](https://arxiv.org/html/2511.00839v1#bib.bib12)), we use Elo scores with a base rating of R = 1200 and a slope of 400 to quantify the overall strength of each model. Instead of calculating Elo scores using sequential updates (which require a choice of step size and depend on update order), we perform a more rigorous maximum likelihood fit to the win rates. We validate rank stability and our statistical treatment with both parametric and non-parametric bootstrapping experiments and observe more than 98% pairwise order agreement. For details, see §[C.3.1](https://arxiv.org/html/2511.00839v1#A3.SS3.SSS1 "C.3.1 Definitions ‣ C.3 Evaluation Metrics ‣ C.2 Tournament Configuration ‣ 2nd item ‣ C.1 mini-SWE-agent Configuration ‣ Appendix C Evaluation ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering").
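For intuition, the fit amounts to maximizing a Bradley–Terry-style likelihood under the Elo win-probability model. The sketch below shows one way to compute it with scipy on a toy win-count matrix; it is our illustration of the idea, not the exact estimation code behind the leaderboard.

```python
import numpy as np
from scipy.optimize import minimize


def fit_elo(wins: np.ndarray, base: float = 1200.0, slope: float = 400.0) -> np.ndarray:
    """Maximum-likelihood Elo ratings from a matrix of pairwise win counts.

    wins[i, j] holds the number of rounds model i won against model j.
    Ratings are identified only up to an additive shift, so their mean is
    anchored at `base` after fitting.
    """
    n = wins.shape[0]

    def neg_log_likelihood(r: np.ndarray) -> float:
        diff = r[:, None] - r[None, :]             # pairwise rating gaps
        p = 1.0 / (1.0 + 10.0 ** (-diff / slope))  # P(i beats j) under Elo
        return -(wins * np.log(p + 1e-12)).sum()

    ratings = minimize(neg_log_likelihood, x0=np.zeros(n), method="L-BFGS-B").x
    return ratings - ratings.mean() + base


# Toy example: wins[i, j] = rounds model i won against model j.
wins = np.array([[0, 7, 9],
                 [3, 0, 6],
                 [1, 4, 0]], dtype=float)
print(fit_elo(wins).round(1))  # the strongest model receives the highest rating
```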
4 Results
---------

Table 1: Elo ratings per model per arena.

![Image 2: [Uncaptioned image]](https://arxiv.org/html/2511.00839v1/x2.png)Figure 2: Model win rates (row beats column). Win rate is the proportion of tournaments (out of 240) won across all arenas. Claude Sonnet 4.5 has the highest average win rate at 69.9%. ![Image 3: [Uncaptioned image]](https://arxiv.org/html/2511.00839v1/x3.png)Figure 3: Win rates across rounds, illustrating how different models gain (Claude Sonnet 4.5) or lose momentum (GPT-5) over the course of the tournament.

We present our main results in Table [1](https://arxiv.org/html/2511.00839v1#S4.T1 "Table 1 ‣ 4 Results ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering"). Claude Sonnet 4.5 stands at the top, followed closely by o3 and GPT-5. After a gap of 100 Elo, the next best models are Claude Sonnet 4 and GPT-5-mini. Notably, no single model dominates all arenas. Top-ranked Claude Sonnet 4.5 places just 4th in Poker, emphasizing the importance of CodeClash’s support for multiple arenas. Figure [2](https://arxiv.org/html/2511.00839v1#S4.F2 "Figure 2 ‣ 4 Results ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering") shows win rates of specific matchups. Figure [3](https://arxiv.org/html/2511.00839v1#S4.F3 "Figure 3 ‣ 4 Results ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering") reveals distinct performance trends across rounds – some models excel early before plateauing, while others improve steadily over time.

### 4.1 Ablations

On RobotRumble, models trail substantially behind expert human programmers. From RobotRumble’s leaderboard ([https://robotrumble.org/boards/2](https://robotrumble.org/boards/2)), we identified the top open-source submission as of October 31, 2025, a bot called gigachad authored by entropicdrifter ([https://robotrumble.org/entropicdrifter/gigachad](https://robotrumble.org/entropicdrifter/gigachad)). We run 10 tournaments of 15 rounds of Claude Sonnet 4.5 (#1 on RobotRumble) versus gigachad. Throughout a tournament, gigachad remains static; no human or LM optimizes it between rounds.

Claude Sonnet 4.5 is dominated by gigachad, winning exactly zero of the 150 rounds. Each round of RobotRumble, we run 250 simulations and determine the winner by majority. Out of $150\times 250=37{,}500$ simulations, Claude Sonnet 4.5’s code wins zero. We discuss where models fall short in depth in Section [5](https://arxiv.org/html/2511.00839v1#S5 "5 Analysis ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering").

Leaderboards for other arenas either do not exist (Core War, RoboCode) or do not have readily available open-source, ranked submissions (Halite, BattleSnake, Poker). While striking, our results admittedly are drawn from a limited sample. We hope CodeClash can facilitate further exploration of human-AI dynamics (e.g., humans competing against evolving AI opponents, or human-AI collaborative development). Such studies require careful experimental design and recruitment that is best left to future work.

Models have limited capacity for opponent analysis even with transparent codebases. 
For each pairwise matchup among Claude Sonnet 4.5, GPT-5, and Gemini 2.5 Pro, we run 10 Core War tournaments of 15 rounds each, with one modification – before the edit phase of round n, each player receives a read-only copy of their opponent’s code from round n−1. While the relative standings remain consistent with the default setting, the win rates change, with GPT-5 securing 74.6% (+7.8%) of rounds, Claude Sonnet 4.5 at 53.2% (−1.8%), and Gemini 2.5 Pro at 22.7% (−5.5%). Curiously, GPT-5 accesses its opponent’s codebase in only 12.8% of all rounds, far fewer than Claude Sonnet 4.5 (99.3%) and Gemini 2.5 Pro (52.9%), suggesting that frequent inspection of opponent code does not necessarily translate to competitive advantage, as our analysis in Section [5.2](https://arxiv.org/html/2511.00839v1#S5.SS2 "5.2 Strategic Reasoning Limitations ‣ 5 Analysis ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering") reaffirms. Additional insights appear in §[D.2](https://arxiv.org/html/2511.00839v1#A4.SS2 "D.2 Additional Ablations ‣ Appendix D Extended Results ‣ C.3.3 Statistical validation and rank stability ‣ C.3 Evaluation Metrics ‣ C.2 Tournament Configuration ‣ 2nd item ‣ C.1 mini-SWE-agent Configuration ‣ Appendix C Evaluation ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering"). Subsequent studies could more thoroughly investigate and enhance models’ capacity for detecting opponents’ weaknesses and designing tailored counter-strategies.

Multi-agent competitions (3+ players) reflect similar rankings. We run 20 Core War tournaments, 15 rounds each, with 6 of the 8 models (excluding GPT-5-mini and Claude Sonnet 4). To quantify performance, as shown in Figure [42](https://arxiv.org/html/2511.00839v1#A4.F42 "Figure 42 ‣ D.2 Additional Ablations ‣ Appendix D Extended Results ‣ C.3.3 Statistical validation and rank stability ‣ C.3 Evaluation Metrics ‣ C.2 Tournament Configuration ‣ 2nd item ‣ C.1 mini-SWE-agent Configuration ‣ Appendix C Evaluation ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering"), we use the TrueSkill rating system (Herbrich et al., [2006](https://arxiv.org/html/2511.00839v1#bib.bib23)), since Elo and win rate are limited to one-on-one settings; a short usage sketch follows below. The results are similar to the Core War ranks in Table [1](https://arxiv.org/html/2511.00839v1#S4.T1 "Table 1 ‣ 4 Results ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering"), with GPT-5 and Grok Code Fast 1 (two models of similar Elo rating) switching positions. However, the 6-player tournaments exhibit far more competitive volatility. Lead changes (the round n winner differing from the round n−1 winner) occur 48.4% of the time in 6-player Core War, compared to just 18.2% in the two-player setting. Winners of 6-player tournaments capture just 28.6% of total points on average, versus 78.0% in 2-player settings. We provide some additional insights in §[D](https://arxiv.org/html/2511.00839v1#A4 "Appendix D Extended Results ‣ C.3.3 Statistical validation and rank stability ‣ C.3 Evaluation Metrics ‣ C.2 Tournament Configuration ‣ 2nd item ‣ C.1 mini-SWE-agent Configuration ‣ Appendix C Evaluation ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering"). We look forward to future work that leverages CodeClash’s multi-player tournaments as a testbed for understanding strategic behaviors such as coalition dynamics, positional play, and risk management.
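For reference, the open-source trueskill package reduces the free-for-all rating update to a few lines; the environment parameters, player names, and placements below are illustrative rather than our exact configuration.

```python
import trueskill

# Illustrative environment; the draw probability is an assumption of this sketch.
env = trueskill.TrueSkill(draw_probability=0.05)
players = ["claude-sonnet-4.5", "o3", "gpt-5",
           "gemini-2.5-pro", "qwen3-coder", "grok-code-fast-1"]
ratings = {name: env.create_rating() for name in players}


def update_from_round(placements: list[str]) -> None:
    """Update ratings from one free-for-all round; placements[0] finished first."""
    groups = [(ratings[name],) for name in placements]  # one player per "team"
    new_groups = env.rate(groups, ranks=list(range(len(placements))))  # lower = better
    for name, (new_rating,) in zip(placements, new_groups):
        ratings[name] = new_rating
```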
5 Analysis
----------

### 5.1 Competitive Dynamics

Beyond overall win rates, we analyze how models interact with their codebases, along with the resilience of models after losing individual rounds. We also investigate trends in models’ solution diversity and codebase organization across tournaments.

![Image 4: Refer to caption](https://arxiv.org/html/2511.00839v1/x4.png)

Figure 4: Probability of winning the next round after losing several rounds in a row. Even the highest-ranking models struggle to recover after losing one or more consecutive rounds in a tournament. Numbers in parentheses indicate the overall average win rate.

![Image 5: Refer to caption](https://arxiv.org/html/2511.00839v1/x5.png)

Figure 5: To measure solution diversity, we compute the code similarity among each model’s solutions at the same round. Each data point represents the mean pairwise similarity of a model’s solutions (main.py) at round n across 70 BattleSnake tournaments.

Models interact with codebases in markedly different ways. CodeClash’s open-ended setting reveals striking differences in how models operate in the edit phase. For instance, while o3 and Gemini 2.5 Pro typically edit an average of only 2 files per round, GPT-5 usually changes 5 to 6. The size of edits also varies – on one end, o3 typically adds or removes a total of 51 lines per round, 8× less than Qwen3-Coder or the Claude Sonnet family, which usually modify more than 400 lines. Gemini 2.5 Pro stands out as a verbose thinker, generating an average of 105 words per thought, more than double the average. Claude Sonnet 4.5 usually takes 23 of the allotted 30 editing turns per round, whereas GPT-5 and o3 typically conclude after just 15 steps. Distributions visualizing these tendencies appear in §[D.1](https://arxiv.org/html/2511.00839v1#A4.SS1 "D.1 Interaction Trends ‣ Appendix D Extended Results ‣ C.3.3 Statistical validation and rank stability ‣ C.3 Evaluation Metrics ‣ C.2 Tournament Configuration ‣ 2nd item ‣ C.1 mini-SWE-agent Configuration ‣ Appendix C Evaluation ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering").

Intriguingly, we did not find any correlation between these behaviors and win rates. Both minimalists (o3) and high-activity editors (Claude Sonnet 4.5) succeed. Compared to existing benchmarks that terminate upon reaching a solution, CodeClash’s multi-round competitive setting makes these distinctions even more salient.

Even strong models struggle to recover after losing rounds. In real-world software development, early choices are often made under uncertainty: the best approach might only become clear after testing, real-world deployments, and observing competitors. Therefore, the ability to interpret noisy signals and to reconsider internal hypotheses and core design decisions is an important factor in real-world success. The round-based nature of CodeClash exposes how poorly LMs adapt once their initial strategies fail. Figure [4](https://arxiv.org/html/2511.00839v1#S5.F4 "Figure 4 ‣ 5.1 Competitive Dynamics ‣ 5 Analysis ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering") shows that even for Claude Sonnet 4.5, losing a single round results in a comeback probability (the probability of winning the next round) of less than one third — less than half of its overall round win rate of 71%. For o3, the win rate drops to only 26% after a single loss (compared to an overall round win rate of 65%). After five consecutive defeats, comeback rates fall below 15% for Claude Sonnet 4.5 and below 10% for all other models. This suggests an inability of models to reconsider strategies or to adapt to opponents or the arena state.
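The comeback statistic behind Figure 4 is straightforward to reproduce from per-round outcomes. Below is a sketch that assumes each tournament has been reduced to a win/loss sequence for one player; ignoring draws is a simplification of ours.

```python
from collections import defaultdict


def comeback_rates(tournaments: list[list[bool]]) -> dict[int, float]:
    """P(win round n | lost the previous k rounds), pooled over tournaments.

    Each tournament is a list of per-round outcomes for one player
    (True = won that round); draws are ignored in this sketch.
    """
    attempts, wins = defaultdict(int), defaultdict(int)
    for rounds in tournaments:
        streak = 0  # current run of consecutive losses
        for won in rounds:
            if streak > 0:           # this round is a comeback attempt
                attempts[streak] += 1
                wins[streak] += won
            streak = 0 if won else streak + 1
    return {k: wins[k] / attempts[k] for k in sorted(attempts)}
```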
Models’ solutions become increasingly diverse with every round. For each [model, opponent, round] tuple, we compute code similarity across the model’s solutions (10 samples) using Python’s difflib.SequenceMatcher (Ratcliff et al., [1988](https://arxiv.org/html/2511.00839v1#bib.bib48)). For example, we have 10 tournaments of Claude Sonnet 4.5 vs. o3 from our main results; we compute a similarity matrix between all 10 versions of Claude Sonnet 4.5’s main.py at each of rounds 1/5/10/15, and finally calculate a mean similarity score. We run this analysis only for the BattleSnake arena, since its solutions are written in Python in a single main.py file. From Figure [5](https://arxiv.org/html/2511.00839v1#S5.F5 "Figure 5 ‣ 5.1 Competitive Dynamics ‣ 5 Analysis ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering"), we observe that models’ solutions generally become more dissimilar with every round. Each round, models attempt not only to make absolute improvements, but also to adapt to opponent play. Solution diversity varies by model (o3 at 0.63 versus GPT-5 at 0.41 at round 1), though the effect of the opponent’s identity is less pronounced, as we show in §[D.4](https://arxiv.org/html/2511.00839v1#A4.SS4 "D.4 Additional Analyses ‣ D.3.4 Action space analysis ‣ D.3.3 Hallucinations ‣ D.3.2 Groundedness of edits and validation of edits ‣ D.3 Analyzing trajectories using LMs as a judge ‣ Appendix D Extended Results ‣ C.3.3 Statistical validation and rank stability ‣ C.3 Evaluation Metrics ‣ C.2 Tournament Configuration ‣ 2nd item ‣ C.1 mini-SWE-agent Configuration ‣ Appendix C Evaluation ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering"). Unlike existing code benchmarks where models quickly converge on canonical solutions, CodeClash elicits substantial creativity from models, even against the same opponent. This diversity makes CodeClash a potentially effective training ground for improving models via self-play and reinforcement learning (Zelikman et al., [2022](https://arxiv.org/html/2511.00839v1#bib.bib67)).
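The measurement itself is only a few lines; in the sketch below, `solutions` would hold the ten main.py snapshots drawn from parallel tournaments at a given round.

```python
from difflib import SequenceMatcher
from itertools import combinations


def mean_pairwise_similarity(solutions: list[str]) -> float:
    """Mean difflib ratio over all pairs of main.py contents at one round."""
    pairs = list(combinations(solutions, 2))
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)
```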
![Image 6: Refer to caption](https://arxiv.org/html/2511.00839v1/x6.png)

Figure 6: The total number of created files scales almost linearly with the round. R refers to the filename redundancy at round 15; high values indicate repeating patterns in filenames (such as main1.py, main2.py, …).

![Image 7: Refer to caption](https://arxiv.org/html/2511.00839v1/x7.png)

Figure 7: Models differ in the average number of _throwaway files_ (files not used after the round in which they were created). The stacked bars distinguish between files at the repository root and those in subdirectories.

Codebases managed by models become messier over time. In most human-managed codebases, the rate of file creation quickly plateaus once the overall structure has been established; subsequent work primarily focuses on refinement, maintenance, and incremental improvements rather than continuous expansion. In contrast, we observe a markedly different trend in Figure [6](https://arxiv.org/html/2511.00839v1#S5.F6 "Figure 6 ‣ 5.1 Competitive Dynamics ‣ 5 Analysis ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering"): the average number of agent-created files scales almost linearly with the number of rounds. Claude Sonnet 4.5 exhibits the highest file creation activity, averaging more than 30 files per tournament, followed by GPT-5 (21), whereas o3 creates fewer than 5. For Claude Sonnet 4.5, the high average is driven by consistent creation of various files at the repository root (making the codebase even less orderly); for GPT-5, the average is elevated by tournaments that accumulate particularly many output and temporary files in separate directories that are never cleaned up. These observations again highlight how the top three models interact with their codebases in distinctly different ways.

When many files are produced, filenames often become repetitive and follow systematic patterns (e.g., analyze_round_13_v2.py). We quantify this effect through the _filename redundancy_ metric (the fraction of files sharing name prefixes with other files; one plausible implementation is sketched below), which is particularly high for Qwen3-Coder (59%) and the Claude Sonnet models (35%). In addition, most agent-created files are never referenced, reused, or modified in subsequent rounds. We quantify these _throwaway files_ in Figure [7](https://arxiv.org/html/2511.00839v1#S5.F7 "Figure 7 ‣ 5.1 Competitive Dynamics ‣ 5 Analysis ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering"): Claude Sonnet 4.5 (18 files per tournament) and GPT-5 (15) again rank at the top, whereas o3 remains near the bottom.
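Since the filename redundancy metric is described only informally here, the sketch below is one plausible reading, where files share a prefix if their name stems match once trailing digits and version suffixes are stripped; the authors’ exact normalization may differ.

```python
import re
from collections import Counter


def filename_redundancy(filenames: list[str]) -> float:
    """Fraction of files whose normalized name stem is shared with another file.

    Normalization strips extensions and trailing digit/version suffixes, so
    main1.py and main2.py (or analyze_round_13_v2.py and analyze_round_7.py)
    count as sharing a prefix. This is one plausible reading of the metric.
    """
    if not filenames:
        return 0.0
    stems = [re.sub(r"(_?v?\d+)+$", "", name.rsplit(".", 1)[0]) for name in filenames]
    counts = Counter(stems)
    shared = sum(1 for stem in stems if counts[stem] > 1)
    return shared / len(filenames)
```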
Together, Figure [6](https://arxiv.org/html/2511.00839v1#S5.F6 "Figure 6 ‣ 5.1 Competitive Dynamics ‣ 5 Analysis ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering") and Figure [7](https://arxiv.org/html/2511.00839v1#S5.F7 "Figure 7 ‣ 5.1 Competitive Dynamics ‣ 5 Analysis ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering") reinforce the view that most LMs struggle to converge toward maintainable file structures over time, favoring the continual generation of new, often redundant scripts over the systematic refinement and reuse of existing code. We include more graphs, along with case studies of specific codebases, in §[D.4](https://arxiv.org/html/2511.00839v1#A4.SS4 "D.4 Additional Analyses ‣ D.3.4 Action space analysis ‣ D.3.3 Hallucinations ‣ D.3.2 Groundedness of edits and validation of edits ‣ D.3 Analyzing trajectories using LMs as a judge ‣ Appendix D Extended Results ‣ C.3.3 Statistical validation and rank stability ‣ C.3 Evaluation Metrics ‣ C.2 Tournament Configuration ‣ 2nd item ‣ C.1 mini-SWE-agent Configuration ‣ Appendix C Evaluation ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering").

### 5.2 Strategic Reasoning Limitations

We investigate models’ capacity for self-improvement by analyzing how they interpret competition results to diagnose failures, decide what code changes to make, and validate those changes. This analysis is performed using GPT-5 with high reasoning effort as a judge. Details, as well as additional analyses of agent trajectories in terms of the nature of actions, are presented in Appendix [D.3](https://arxiv.org/html/2511.00839v1#A4.SS3 "D.3 Analyzing trajectories using LMs as a judge ‣ Appendix D Extended Results ‣ C.3.3 Statistical validation and rank stability ‣ C.3 Evaluation Metrics ‣ C.2 Tournament Configuration ‣ 2nd item ‣ C.1 mini-SWE-agent Configuration ‣ Appendix C Evaluation ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering").

![Image 8: Refer to caption](https://arxiv.org/html/2511.00839v1/x8.png)

Figure 8: LMs struggle to analyze log files from previous rounds and frequently hallucinate about why rounds were lost. Using an LM as a judge, we annotate players’ trajectories with answers to three questions: (a) Are changes to solutions grounded in the analysis of previous rounds or testing? (b) Are there hallucinated or unsubstantiated claims about why a round was lost? (c) Are changes validated by arena simulations or unit tests?

Most models struggle to interpret logs or derive meaningful insights about their performance. Agents have access to detailed log records of all previous rounds, encompassing several hundred to thousands of runs against their opponent. These logs can not only reveal whether the last round’s changes improved the win rate, but also detail the exact behavior that led to losses or wins. However, despite explicit suggestions in the prompt to write analysis tooling, most LMs do not manage to extract meaningful information, often stopping at reading the first lines of a log file or calculating the win rate of the last round. Figure [8](https://arxiv.org/html/2511.00839v1#S5.F8 "Figure 8 ‣ 5.2 Strategic Reasoning Limitations ‣ 5 Analysis ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering")(a) shows whether the combined output of the agent’s actions (i.e., the entirety of the information available to the agent) could motivate the edits the agent performed. While most edits by the Claude Sonnet models can be motivated in this way, the edits of all other models are ungrounded in more than 65% of all rounds. Interestingly, o3 scores particularly low in this respect, with ungrounded edits in almost 80% of rounds.

Models hallucinate during failure analysis and misinterpret logs and analysis outputs. The most salient pattern is agents inferring causal explanations for arena outcomes after reviewing only the opening lines of a single log file, when those lines do not even show the deciding moment in the arena. Behaviors of this kind are quantified in Figure [8](https://arxiv.org/html/2511.00839v1#S5.F8 "Figure 8 ‣ 5.2 Strategic Reasoning Limitations ‣ 5 Analysis ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering")(b). For example, Claude Sonnet 4.5 makes uncorroborated claims about the exact reason a game was lost in more than 17% of rounds on average. This behavior is much more pronounced in certain arenas, such as BattleSnake, where Claude Sonnet 4 and Claude Sonnet 4.5 hallucinate about loss causality in 34% and 46% of rounds, respectively. Most hallucinations are misinterpretations or over-interpretations of log files and similar outputs, though claims that cannot be connected to any source also occur.

Models make changes without assessing their effects. When models propose algorithmic changes, they seldom confirm whether modifications work as intended or whether the new solution outperforms previous iterations. The prompt explicitly suggests running arena simulations between different versions of the code or writing unit tests to validate intended behavior. Combining such exploratory methods with self-play could likely avoid unwanted regressions. Nevertheless, most models deploy untested code. As shown in Figure [8](https://arxiv.org/html/2511.00839v1#S5.F8 "Figure 8 ‣ 5.2 Strategic Reasoning Limitations ‣ 5 Analysis ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering")(c), only Claude Sonnet 4.5 validates changes in a majority of rounds (56%), followed by GPT-5 (50%), whereas Gemini 2.5 Pro and o3 perform meaningful validation in only one out of five rounds.
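The suggested self-play gate is simple to express in code. In this sketch, `play_match` stands in for a single arena simulation between two versions of the codebase, and both the game count and the 55% bar are arbitrary choices of ours.

```python
def should_deploy(candidate, incumbent, play_match, n_games: int = 200,
                  threshold: float = 0.55) -> bool:
    """Keep a code change only if it beats the previous version in self-play.

    play_match(a, b) is assumed to run one arena simulation and return True
    if `a` wins; the API and the threshold are illustrative, not CodeClash's.
    """
    wins = sum(play_match(candidate, incumbent) for _ in range(n_games))
    return wins / n_games >= threshold
```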
Models rarely make bash mistakes. Across all models, more than 85% of generated actions execute successfully, with error rates ranging from just 10% (Claude Sonnet 4) to 16% (Qwen3-Coder). Models also recover rapidly from errors: following a failed command, the very next action runs successfully more than 80% of the time. This stands in stark contrast to earlier findings of “cascading failures” in agent systems (Yang et al., [2024a](https://arxiv.org/html/2511.00839v1#bib.bib61); Pan et al., [2025](https://arxiv.org/html/2511.00839v1#bib.bib43)), suggesting command-line proficiency has improved substantially in recent models. These results indicate that performance differences in CodeClash stem from strategic reasoning and code quality, not bash interface capabilities. More graphs confirming this strength appear in §[D.1](https://arxiv.org/html/2511.00839v1#A4.SS1 "D.1 Interaction Trends ‣ Appendix D Extended Results ‣ C.3.3 Statistical validation and rank stability ‣ C.3 Evaluation Metrics ‣ C.2 Tournament Configuration ‣ 2nd item ‣ C.1 mini-SWE-agent Configuration ‣ Appendix C Evaluation ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering").

6 Related Works
---------------

Software engineering benchmarks. Early evaluations of LMs’ coding capabilities typically tasked models with completing the body of a function given its header and a brief description (Austin et al., [2021](https://arxiv.org/html/2511.00839v1#bib.bib4); Chen et al., [2021](https://arxiv.org/html/2511.00839v1#bib.bib11); Hendrycks et al., [2021](https://arxiv.org/html/2511.00839v1#bib.bib22); Liu et al., [2023](https://arxiv.org/html/2511.00839v1#bib.bib33); Jain et al., [2024](https://arxiv.org/html/2511.00839v1#bib.bib27); Zhuo et al., [2025](https://arxiv.org/html/2511.00839v1#bib.bib72)). As performance on such benchmarks has saturated, the community’s attention has shifted towards more complex, repository-level tasks, notably SWE-bench (Jimenez et al., [2024](https://arxiv.org/html/2511.00839v1#bib.bib29)). Given a GitHub issue, an LM must rewrite the codebase such that the proposed fix passes one or more unit tests. SWE-bench has since been extended in multiple directions, including evaluation (Chowdhury et al., [2024](https://arxiv.org/html/2511.00839v1#bib.bib13); Yang et al., [2024b](https://arxiv.org/html/2511.00839v1#bib.bib62); Rashid et al., [2025](https://arxiv.org/html/2511.00839v1#bib.bib47); Deng et al., [2025](https://arxiv.org/html/2511.00839v1#bib.bib16); Zan et al., [2025](https://arxiv.org/html/2511.00839v1#bib.bib66); Zhang et al., [2025c](https://arxiv.org/html/2511.00839v1#bib.bib71)), issue resolution workflows and SWE-agents (Xia et al., [2024](https://arxiv.org/html/2511.00839v1#bib.bib58); Yang et al., [2024a](https://arxiv.org/html/2511.00839v1#bib.bib61); Wang et al., [2025b](https://arxiv.org/html/2511.00839v1#bib.bib55)), and datasets (Jain et al., [2025](https://arxiv.org/html/2511.00839v1#bib.bib28); Pan et al., [2025](https://arxiv.org/html/2511.00839v1#bib.bib43); Pham et al., [2025](https://arxiv.org/html/2511.00839v1#bib.bib44); Yang et al., [2025](https://arxiv.org/html/2511.00839v1#bib.bib63)). Unlike these benchmarks, where the objective and often the recommended approach are explicitly specified, CodeClash offers no predetermined notion of what constitutes improved code. LMs must determine and pursue their own refinement strategies (Wang et al., [2024](https://arxiv.org/html/2511.00839v1#bib.bib56)). 
This adversarial setting evaluates capabilities beyond codebase manipulation, such as strategic thinking, adaptation to opponents, and long-term planning.

Performance optimization. In lieu of unit tests, several benchmarks instead evaluate LMs on code optimization, such as boosting algorithmic efficiency (Du et al., [2024](https://arxiv.org/html/2511.00839v1#bib.bib17); Liu et al., [2024](https://arxiv.org/html/2511.00839v1#bib.bib34); Waghjale et al., [2024](https://arxiv.org/html/2511.00839v1#bib.bib53); Huang et al., [2025](https://arxiv.org/html/2511.00839v1#bib.bib26)) or reducing runtime (He et al., [2025](https://arxiv.org/html/2511.00839v1#bib.bib21); Ouyang et al., [2025](https://arxiv.org/html/2511.00839v1#bib.bib41); Press et al., [2025](https://arxiv.org/html/2511.00839v1#bib.bib45); Shetty et al., [2025](https://arxiv.org/html/2511.00839v1#bib.bib49)). As in CodeClash, how an LM goes about improving a codebase is entirely self-prescribed; there are no specific instructions or hints about methodology. Unlike CodeClash, however, these LMs carry out optimizations in isolation: their codebases do not directly compete, nor must they anticipate or adapt to opponents’ strategies. Moreover, the objectives of existing optimization tasks are relatively narrow. In contrast, CodeClash supports diverse environments with flexible win conditions, enabling LM-based code evolution for goals beyond runtime performance.

Game playing. Video and text games have long been used as testbeds for studying reinforcement learning agents (Mnih et al., [2015](https://arxiv.org/html/2511.00839v1#bib.bib35); Silver et al., [2016](https://arxiv.org/html/2511.00839v1#bib.bib51); OpenAI et al., [2019](https://arxiv.org/html/2511.00839v1#bib.bib39)), with a resurgence in use for evaluating LMs (Yao et al., [2020](https://arxiv.org/html/2511.00839v1#bib.bib64); Hu et al., [2025](https://arxiv.org/html/2511.00839v1#bib.bib25); Karten et al., [2025](https://arxiv.org/html/2511.00839v1#bib.bib31); Paglieri et al., [2025](https://arxiv.org/html/2511.00839v1#bib.bib42); Zhang et al., [2025a](https://arxiv.org/html/2511.00839v1#bib.bib68)). While past work has an AI system directly play a game, to our knowledge, CodeClash is the first to study the interplay of interactive coding and gaming for evaluating LMs. Furthermore, CodeClash’s task formulation aims to represent not just games, but general real-world, competitive software development, where codebases essentially compete against one another to achieve goals.

Self-improving agents. Recent work has explored how LMs can evolve agent scaffolds for better performance on software development tasks, namely SWE-bench (Wang et al., [2025a](https://arxiv.org/html/2511.00839v1#bib.bib54); Zhang et al., [2025b](https://arxiv.org/html/2511.00839v1#bib.bib70)). However, static benchmarks relying on fixed correctness metrics like unit tests are an awkward fit for prototyping self-improvement systems. Unit tests only provide binary pass/fail feedback, and once passed, they are no longer useful for further refinement. CodeClash’s competitive setting with constantly evolving opponents provides a perpetual learning signal that does not saturate. Performance is graded relatively, a much richer training signal than binary correctness. We hope future work on self-improving SWE-agents will consider CodeClash as a training ground.

7 Discussion
------------

Limitations and future directions. 
CodeClash’s code arenas are smaller and more self-contained than most real-world software systems. We would be excited to support code arenas encompassing tougher settings, where SWE-agents manage larger codebases while pursuing multiple competitive objectives (e.g., city planning, disaster preparedness, cybersecurity). Second, CodeClash uses mini-SWE-agent, reflecting our intention to focus on the evaluation of LMs by holding the agent scaffold constant. With that said, a simple next step could be to swap out mini-SWE-agent for tool-based frameworks (Yang et al., [2024a](https://arxiv.org/html/2511.00839v1#bib.bib61); Wang et al., [2025b](https://arxiv.org/html/2511.00839v1#bib.bib55)) to maximize AI systems’ performance. Third, logs from the competition phase are entirely text-based; we do not explore vision language models (VLMs) in this work, and supporting multimodal feedback is on the roadmap for future investigations. Finally, we are curious about the value of CodeClash’s artifacts and environments for improving model capabilities, whether via pre-training on traces of models’ edits or post-training with techniques like self-play and reinforcement learning.

Conclusion. By situating LMs in tournaments where their codebases compete directly, CodeClash reveals both the creative potential and fundamental limitations of current models. Models devise remarkably diverse solutions and demonstrate technical proficiency, but struggle to draw meaningful conclusions from competition logs or maintain well-organized codebases over time. These findings offer clear avenues for future work. We hope CodeClash will serve as a reliable, extensible training ground for evaluating and building the next generation of long-running, autonomous software development systems.

Acknowledgments
---------------

We thank Laude Institute, Andreessen Horowitz, and Open Philanthropy for providing funding for this work. We thank Princeton Language & Intelligence (PLI) for providing credits for running closed-source API models. Thanks to Samuel Ainsworth for his constant support of bitbop.io ([https://bitbop.io/](https://bitbop.io/)), the compute service on which this project was carried out. We also thank Shiyi Cao, William Held, Abe (Bohan) Hou, Dacheng Li, Jeffrey J. Ma, Karthik R. Narasimhan, Yijia Shao, Chenglei Si, Zora (Zhiruo) Wang, Alexander Wettig, and Yanzhe Zhang for constructive discussions and support throughout this project. Finally, our greatest thanks to the open source development communities that created and maintain several of the competitive code arenas represented in CodeClash.

References
----------

* Abramovich et al. (2025) Talor Abramovich, Meet Udeshi, Minghao Shao, Kilian Lieret, Haoran Xi, Kimberly Milner, Sofija Jancheska, John Yang, Carlos E. Jimenez, Farshad Khorrami, Prashanth Krishnamurthy, Brendan Dolan-Gavitt, Muhammad Shafique, Karthik Narasimhan, Ramesh Karri, and Ofir Press. Enigma: Interactive tools substantially assist lm agents in finding security vulnerabilities, 2025. URL [https://arxiv.org/abs/2409.16165](https://arxiv.org/abs/2409.16165).
* Anthropic (2025a) Anthropic. Claude sonnet 4.5, 2025a. URL [https://www.anthropic.com/news/claude-sonnet-4-5](https://www.anthropic.com/news/claude-sonnet-4-5).
* Anthropic (2025b) Anthropic. Claude 4 sonnet, 2025b. URL [https://www.anthropic.com/news/claude-4](https://www.anthropic.com/news/claude-4).
* Austin et al. 
(2021) Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, and Charles Sutton. Program synthesis with large language models, 2021. URL [https://arxiv.org/abs/2108.07732](https://arxiv.org/abs/2108.07732). +* Bai et al. (2022) Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, Ben Mann, and Jared Kaplan. Training a helpful and harmless assistant with reinforcement learning from human feedback, 2022. URL [https://arxiv.org/abs/2204.05862](https://arxiv.org/abs/2204.05862). +* Battlecode (2025) Battlecode. Battlecode: Mit’s premier programming competition, 2025. URL [https://battlecode.org/](https://battlecode.org/). +* Bibri & Krogstie (2020) Simon Elias Bibri and John Krogstie. The emerging data–driven smart city and its innovative applied solutions for sustainability: The cases of london and barcelona. _Energy Informatics_, 3(1):5, 2020. +* Boubdir et al. (2024) Meriem Boubdir, Edward Kim, Beyza Ermis, Sara Hooker, and Marzieh Fadaee. Elo uncovered: Robustness and best practices in language model evaluation. _Advances in Neural Information Processing Systems_, 37:106135–106161, 2024. +* Bradley & Terry (1952) Ralph Allan Bradley and Milton E Terry. Rank analysis of incomplete block designs: I. the method of paired comparisons. _Biometrika_, 39(3/4):324–345, 1952. +* Brown & Sandholm (2019) Noam Brown and Tuomas Sandholm. Superhuman ai for multiplayer poker. _Science_, 365:885 – 890, 2019. URL [https://api.semanticscholar.org/CorpusID:195892791](https://api.semanticscholar.org/CorpusID:195892791). +* Chen et al. (2021) Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. _arXiv preprint arXiv:2107.03374_, 2021. +* Chiang et al. (2024) Wei-Lin Chiang, Lianmin Zheng, Ying Sheng, Anastasios Nikolas Angelopoulos, Tianle Li, Dacheng Li, Hao Zhang, Banghua Zhu, Michael Jordan, Joseph E. Gonzalez, and Ion Stoica. Chatbot arena: An open platform for evaluating llms by human preference, 2024. URL [https://arxiv.org/abs/2403.04132](https://arxiv.org/abs/2403.04132). +* Chowdhury et al. (2024) Neil Chowdhury, James Aung, Chan Jun Shern, Oliver Jaffe, Dane Sherburn, Giulio Starace, Evan Mays, Rachel Dias, Marwan Aljubeh, Mia Glaese, Carlos E. Jimenez, John Yang, Kevin Liu, and Aleksander Madry. Introducing swe-bench verified, 2024. URL [https://openai.com/index/introducing-swe-bench-verified/](https://openai.com/index/introducing-swe-bench-verified/). +* Chung et al. (2020) Jonathan Chung, Anna Luo, Xavier Raffin, and Scott Perry. Battlesnake challenge: A multi-agent reinforcement learning playground with human-in-the-loop. _arXiv preprint arXiv:2007.10504_, 2020. +* Comanici et al. (2025) Gheorghe Comanici, Eric Bieber, Mike Schaekermann, Ice Pasupat, Noveen Sachdeva, Inderjit Dhillon, Marcel Blistein, Ori Ram, Dan Zhang, Evan Rosen, et al. Gemini 2.5: Pushing the frontier with advanced reasoning, multimodality, long context, and next generation agentic capabilities. 
_arXiv preprint arXiv:2507.06261_, 2025. +* Deng et al. (2025) Xiang Deng, Jeff Da, Edwin Pan, Yannis Yiming He, Charles Ide, Kanak Garg, Niklas Lauffer, Andrew Park, Nitin Pasari, Chetan Rane, Karmini Sampath, Maya Krishnan, Srivatsa Kundurthy, Sean Hendryx, Zifan Wang, Chen Bo Calvin Zhang, Noah Jacobson, Bing Liu, and Brad Kenstler. Swe-bench pro: Can ai agents solve long-horizon software engineering tasks?, 2025. URL [https://scale.com/research/swe_bench_pro](https://scale.com/research/swe_bench_pro). Scale AI Research. +* Du et al. (2024) Mingzhe Du, Anh Tuan Luu, Bin Ji, Qian Liu, and See-Kiong Ng. Mercury: A code efficiency benchmark for code large language models, 2024. URL [https://arxiv.org/abs/2402.07844](https://arxiv.org/abs/2402.07844). +* Elo (1967) Arpad E Elo. The proposed uscf rating system, its development, theory, and applications. _Chess life_, 22(8):242–247, 1967. +* Fudenberg & Tirole (1991) Drew Fudenberg and Jean Tirole. _Game theory_. MIT press, 1991. +* Hartness (2004) Ken Hartness. Robocode: using games to teach artificial intelligence. _Journal of Computing Sciences in Colleges_, 19(4):287–291, 2004. +* He et al. (2025) Xinyi He, Qian Liu, Mingzhe Du, Lin Yan, Zhijie Fan, Yiming Huang, Zejian Yuan, and Zejun Ma. Swe-perf: Can language models optimize code performance on real-world repositories?, 2025. URL [https://arxiv.org/abs/2507.12415](https://arxiv.org/abs/2507.12415). +* Hendrycks et al. (2021) Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, and Jacob Steinhardt. Measuring coding challenge competence with apps, 2021. URL [https://arxiv.org/abs/2105.09938](https://arxiv.org/abs/2105.09938). +* Herbrich et al. (2006) Ralf Herbrich, Tom Minka, and Thore Graepel. Trueskill™: a bayesian skill rating system. _Advances in neural information processing systems_, 19, 2006. +* Hou et al. (2025) Zhen Hou, Hao Liu, Jiang Bian, Xing He, and Yan Zhuang. Enhancing medical coding efficiency through domain-specific fine-tuned large language models. _npj Health Systems_, 2(1):14, 2025. +* Hu et al. (2025) Lanxiang Hu, Qiyu Li, Anze Xie, Nan Jiang, Ion Stoica, Haojian Jin, and Hao Zhang. Gamearena: Evaluating llm reasoning through live computer games, 2025. URL [https://arxiv.org/abs/2412.06394](https://arxiv.org/abs/2412.06394). +* Huang et al. (2025) Dong Huang, Yuhao Qing, Weiyi Shang, Heming Cui, and Jie M. Zhang. Effibench: Benchmarking the efficiency of automatically generated code, 2025. URL [https://arxiv.org/abs/2402.02037](https://arxiv.org/abs/2402.02037). +* Jain et al. (2024) Naman Jain, King Han, Alex Gu, Wen-Ding Li, Fanjia Yan, Tianjun Zhang, Sida Wang, Armando Solar-Lezama, Koushik Sen, and Ion Stoica. Livecodebench: Holistic and contamination free evaluation of large language models for code, 2024. URL [https://arxiv.org/abs/2403.07974](https://arxiv.org/abs/2403.07974). +* Jain et al. (2025) Naman Jain, Jaskirat Singh, Manish Shetty, Liang Zheng, Koushik Sen, and Ion Stoica. R2e-gym: Procedural environments and hybrid verifiers for scaling open-weights swe agents, 2025. URL [https://arxiv.org/abs/2504.07164](https://arxiv.org/abs/2504.07164). +* Jimenez et al. (2024) Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao, Kexin Pei, Ofir Press, and Karthik Narasimhan. Swe-bench: Can language models resolve real-world github issues?, 2024. URL [https://arxiv.org/abs/2310.06770](https://arxiv.org/abs/2310.06770). +* Jones & Dewdney (1984) D.G. Jones and A.K. 
Dewdney. Core wars guidelines, 1984. URL [https://corewar.co.uk/standards/cwg.txt](https://corewar.co.uk/standards/cwg.txt). +* Karten et al. (2025) Seth Karten, Andy Luu Nguyen, and Chi Jin. Pokéchamp: an expert-level minimax language agent, 2025. URL [https://arxiv.org/abs/2503.04094](https://arxiv.org/abs/2503.04094). +* Kumar et al. (2025) Bhavesh Kumar, Hoang Nguyen, and Roger Jin. Husky hold’em bench. [https://huskybench.com/](https://huskybench.com/), 2025. Accessed: 2025-09-08. +* Liu et al. (2023) Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. Is your code generated by chatgpt really correct? rigorous evaluation of large language models for code generation, 2023. URL [https://arxiv.org/abs/2305.01210](https://arxiv.org/abs/2305.01210). +* Liu et al. (2024) Jiawei Liu, Songrun Xie, Junhao Wang, Yuxiang Wei, Yifeng Ding, and Lingming Zhang. Evaluating language models for efficient code generation, 2024. URL [https://arxiv.org/abs/2408.06450](https://arxiv.org/abs/2408.06450). +* Mnih et al. (2015) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. _nature_, 518(7540):529–533, 2015. +* Mündler et al. (2024) Niels Mündler, Mark Niklas Mueller, Jingxuan He, and Martin Vechev. SWT-bench: Testing and validating real-world bug-fixes with code agents. In _The Thirty-eighth Annual Conference on Neural Information Processing Systems_, 2024. URL [https://openreview.net/forum?id=9Y8zUO11EQ](https://openreview.net/forum?id=9Y8zUO11EQ). +* OpenAI (2025a) OpenAI. Gpt-5 system card, 2025a. URL [https://openai.com/index/gpt-5-system-card/](https://openai.com/index/gpt-5-system-card/). +* OpenAI (2025b) OpenAI. Openai o3 and o4-mini system card, 2025b. URL [https://openai.com/index/o3-o4-mini-system-card/](https://openai.com/index/o3-o4-mini-system-card/). +* OpenAI et al. (2019) OpenAI, :, Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemyslaw Debiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, Rafal Józefowicz, Scott Gray, Catherine Olsson, Jakub Pachocki, Michael Petrov, Henrique P. d.O.Pinto, Jonathan Raiman, Tim Salimans, Jeremy Schlatter, Jonas Schneider, Szymon Sidor, Ilya Sutskever, Jie Tang, Filip Wolski, and Susan Zhang. Dota 2 with large scale deep reinforcement learning, 2019. URL [https://arxiv.org/abs/1912.06680](https://arxiv.org/abs/1912.06680). +* Outkine & Oxer (2020) Anton Outkine and Noa Oxer. Robot rumble, 2020. URL [https://robotrumble.org/](https://robotrumble.org/). +* Ouyang et al. (2025) Anne Ouyang, Simon Guo, Simran Arora, Alex L. Zhang, William Hu, Christopher Ré, and Azalia Mirhoseini. Kernelbench: Can llms write efficient gpu kernels?, 2025. URL [https://arxiv.org/abs/2502.10517](https://arxiv.org/abs/2502.10517). +* Paglieri et al. (2025) Davide Paglieri, Bartłomiej Cupiał, Samuel Coward, Ulyana Piterbarg, Maciej Wolczyk, Akbir Khan, Eduardo Pignatelli, Łukasz Kuciński, Lerrel Pinto, Rob Fergus, Jakob Nicolaus Foerster, Jack Parker-Holder, and Tim Rocktäschel. Balrog: Benchmarking agentic llm and vlm reasoning on games, 2025. URL [https://arxiv.org/abs/2411.13543](https://arxiv.org/abs/2411.13543). +* Pan et al. (2025) Jiayi Pan, Xingyao Wang, Graham Neubig, Navdeep Jaitly, Heng Ji, Alane Suhr, and Yizhe Zhang. Training software engineering agents and verifiers with swe-gym, 2025. 
URL [https://arxiv.org/abs/2412.21139](https://arxiv.org/abs/2412.21139). +* Pham et al. (2025) Minh VT Pham, Huy N Phan, Hoang N Phan, Cuong Le Chi, Tien N Nguyen, and Nghi DQ Bui. Swe-synth: Synthesizing verifiable bug-fix data to enable large language models in resolving real-world bugs. _arXiv preprint arXiv:2504.14757_, 2025. +* Press et al. (2025) Ori Press, Brandon Amos, Haoyu Zhao, Yikai Wu, Samuel K. Ainsworth, Dominik Krupke, Patrick Kidger, Touqir Sajed, Bartolomeo Stellato, Jisun Park, Nathanael Bosch, Eli Meril, Albert Steppi, Arman Zharmagambetov, Fangzhao Zhang, David Perez-Pineiro, Alberto Mercurio, Ni Zhan, Talor Abramovich, Kilian Lieret, Hanlin Zhang, Shirley Huang, Matthias Bethge, and Ofir Press. Algotune: Can language models speed up general-purpose numerical programs?, 2025. URL [https://arxiv.org/abs/2507.15887](https://arxiv.org/abs/2507.15887). +* Qwen (2025) Alibaba Qwen. Qwen3 technical report, 2025. URL [https://arxiv.org/abs/2505.09388](https://arxiv.org/abs/2505.09388). +* Rashid et al. (2025) Muhammad Shihab Rashid, Christian Bock, Yuan Zhuang, Alexander Buchholz, Tim Esler, Simon Valentin, Luca Franceschi, Martin Wistuba, Prabhu Teja Sivaprasad, Woo Jung Kim, Anoop Deoras, Giovanni Zappella, and Laurent Callot. Swe-polybench: A multi-language benchmark for repository level evaluation of coding agents, 2025. URL [https://arxiv.org/abs/2504.08703](https://arxiv.org/abs/2504.08703). +* Ratcliff et al. (1988) John W Ratcliff, David E Metzener, et al. Pattern matching: The gestalt approach. _Dr. Dobb’s Journal_, 13(7):46, 1988. +* Shetty et al. (2025) Manish Shetty, Naman Jain, Jinjian Liu, Vijay Kethanaboyina, Koushik Sen, and Ion Stoica. Gso: Challenging software optimization tasks for evaluating swe-agents, 2025. URL [https://arxiv.org/abs/2505.23671](https://arxiv.org/abs/2505.23671). +* Shi et al. (2024) Wenqi Shi, Ran Xu, Yuchen Zhuang, Yue Yu, Jieyu Zhang, Hang Wu, Yuanda Zhu, Joyce Ho, Carl Yang, and May D Wang. Ehragent: Code empowers large language models for few-shot complex tabular reasoning on electronic health records. In _Proceedings of the Conference on Empirical Methods in Natural Language Processing. Conference on Empirical Methods in Natural Language Processing_, volume 2024, pp. 22315, 2024. +* Silver et al. (2016) David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. _nature_, 529(7587):484–489, 2016. +* Truell & Spector (2016) Michael Truell and Benjamin Spector. Halite: Two sigma’s first artificial intelligence programming challenge, 2016. URL [https://github.com/HaliteChallenge/Halite](https://github.com/HaliteChallenge/Halite). +* Waghjale et al. (2024) Siddhant Waghjale, Vishruth Veerendranath, Zora Zhiruo Wang, and Daniel Fried. Ecco: Can we improve model-generated code efficiency without sacrificing functional correctness? _arXiv preprint arXiv:2407.14044_, 2024. +* Wang et al. (2025a) Wenyi Wang, Piotr Piekos, Li Nanbo, Firas Laakom, Yimeng Chen, Mateusz Ostaszewski, Mingchen Zhuge, and Jürgen Schmidhuber. Huxley-gödel machine: Human-level coding agent development by an approximation of the optimal self-improving machine, 2025a. URL [https://arxiv.org/abs/2510.21614](https://arxiv.org/abs/2510.21614). +* Wang et al. (2025b) Xingyao Wang, Boxuan Li, Yufan Song, Frank F. 
Xu, Xiangru Tang, Mingchen Zhuge, Jiayi Pan, Yueqi Song, Bowen Li, Jaskirat Singh, Hoang H. Tran, Fuqiang Li, Ren Ma, Mingzhang Zheng, Bill Qian, Yanjun Shao, Niklas Muennighoff, Yizhe Zhang, Binyuan Hui, Junyang Lin, Robert Brennan, Hao Peng, Heng Ji, and Graham Neubig. Openhands: An open platform for ai software developers as generalist agents, 2025b. URL [https://arxiv.org/abs/2407.16741](https://arxiv.org/abs/2407.16741). +* Wang et al. (2024) Zhiruo Wang, Daniel Fried, and Graham Neubig. Trove: Inducing verifiable and efficient toolboxes for solving programmatic tasks, 2024. URL [https://arxiv.org/abs/2401.12869](https://arxiv.org/abs/2401.12869). +* x.ai (2025) x.ai. Grok code fast 1, 2025. URL [https://x.ai/news/grok-code-fast-1](https://x.ai/news/grok-code-fast-1). +* Xia et al. (2024) Chunqiu Steven Xia, Yinlin Deng, Soren Dunn, and Lingming Zhang. Agentless: Demystifying llm-based software engineering agents, 2024. URL [https://arxiv.org/abs/2407.01489](https://arxiv.org/abs/2407.01489). +* Yang et al. (2023a) John Yang, Akshara Prabhakar, Karthik Narasimhan, and Shunyu Yao. Intercode: Standardizing and benchmarking interactive coding with execution feedback, 2023a. URL [https://arxiv.org/abs/2306.14898](https://arxiv.org/abs/2306.14898). +* Yang et al. (2023b) John Yang, Akshara Prabhakar, Shunyu Yao, Kexin Pei, and Karthik R Narasimhan. Language agents as hackers: Evaluating cybersecurity skills with capture the flag. In _Multi-Agent Security Workshop@ NeurIPS’23_, 2023b. +* Yang et al. (2024a) John Yang, Carlos E. Jimenez, Alexander Wettig, Kilian Lieret, Shunyu Yao, Karthik Narasimhan, and Ofir Press. Swe-agent: Agent-computer interfaces enable automated software engineering, 2024a. URL [https://arxiv.org/abs/2405.15793](https://arxiv.org/abs/2405.15793). +* Yang et al. (2024b) John Yang, Carlos E. Jimenez, Alex L. Zhang, Kilian Lieret, Joyce Yang, Xindi Wu, Ori Press, Niklas Muennighoff, Gabriel Synnaeve, Karthik R. Narasimhan, Diyi Yang, Sida I. Wang, and Ofir Press. Swe-bench multimodal: Do ai systems generalize to visual software domains?, 2024b. URL [https://arxiv.org/abs/2410.03859](https://arxiv.org/abs/2410.03859). +* Yang et al. (2025) John Yang, Kilian Lieret, Carlos E. Jimenez, Alexander Wettig, Kabir Khandpur, Yanzhe Zhang, Binyuan Hui, Ofir Press, Ludwig Schmidt, and Diyi Yang. Swe-smith: Scaling data for software engineering agents, 2025. URL [https://arxiv.org/abs/2504.21798](https://arxiv.org/abs/2504.21798). +* Yao et al. (2020) Shunyu Yao, Rohan Rao, Matthew Hausknecht, and Karthik Narasimhan. Keep calm and explore: Language models for action generation in text-based games, 2020. URL [https://arxiv.org/abs/2010.02903](https://arxiv.org/abs/2010.02903). +* Yao et al. (2023) Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models, 2023. URL [https://arxiv.org/abs/2210.03629](https://arxiv.org/abs/2210.03629). +* Zan et al. (2025) Daoguang Zan, Zhirong Huang, Wei Liu, Hanwu Chen, Linhao Zhang, Shulin Xin, Lu Chen, Qi Liu, Xiaojian Zhong, Aoyan Li, Siyao Liu, Yongsheng Xiao, Liangqiang Chen, Yuyu Zhang, Jing Su, Tianyu Liu, Rui Long, Kai Shen, and Liang Xiang. Multi-swe-bench: A multilingual benchmark for issue resolving, 2025. URL [https://arxiv.org/abs/2504.02605](https://arxiv.org/abs/2504.02605). +* Zelikman et al. (2022) Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Goodman. Star: Bootstrapping reasoning with reasoning. 
_Advances in Neural Information Processing Systems_, 35:15476–15488, 2022.
* Zhang et al. (2025a) Alex L. Zhang, Thomas L. Griffiths, Karthik R. Narasimhan, and Ofir Press. Videogamebench: Can vision-language models complete popular video games?, 2025a. URL [https://arxiv.org/abs/2505.18134](https://arxiv.org/abs/2505.18134).
* Zhang et al. (2024) Andy K Zhang, Neil Perry, Riya Dulepet, Joey Ji, Celeste Menders, Justin W Lin, Eliot Jones, Gashon Hussein, Samantha Liu, Donovan Jasper, et al. Cybench: A framework for evaluating cybersecurity capabilities and risks of language models. _arXiv preprint arXiv:2408.08926_, 2024.
* Zhang et al. (2025b) Jenny Zhang, Shengran Hu, Cong Lu, Robert Lange, and Jeff Clune. Darwin godel machine: Open-ended evolution of self-improving agents, 2025b. URL [https://arxiv.org/abs/2505.22954](https://arxiv.org/abs/2505.22954).
* Zhang et al. (2025c) Linghao Zhang, Shilin He, Chaoyun Zhang, Yu Kang, Bowen Li, Chengxing Xie, Junhao Wang, Maoquan Wang, Yufan Huang, Shengyu Fu, Elsie Nallipogu, Qingwei Lin, Yingnong Dang, Saravan Rajmohan, and Dongmei Zhang. Swe-bench goes live!, 2025c. URL [https://arxiv.org/abs/2505.23419](https://arxiv.org/abs/2505.23419).
* Zhuo et al. (2025) Terry Yue Zhuo, Minh Chien Vu, Jenny Chim, Han Hu, Wenhao Yu, Ratnadira Widyasari, Imam Nur Bani Yusuf, Haolan Zhan, Junda He, Indraneil Paul, Simon Brunner, Chen Gong, Thong Hoang, Armel Randy Zebaze, Xiaoheng Hong, Wen-Ding Li, Jean Kaddour, Ming Xu, Zhihan Zhang, Prateek Yadav, Naman Jain, Alex Gu, Zhoujun Cheng, Jiawei Liu, Qian Liu, Zijian Wang, Binyuan Hui, Niklas Muennighoff, David Lo, Daniel Fried, Xiaoning Du, Harm de Vries, and Leandro Von Werra. Bigcodebench: Benchmarking code generation with diverse function calls and complex instructions, 2025. URL [https://arxiv.org/abs/2406.15877](https://arxiv.org/abs/2406.15877).

Appendix
--------

The appendix is structured as follows. In Section[A](https://arxiv.org/html/2511.00839v1#A1 "Appendix A Infrastructure ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering"), we provide additional details about CodeClash’s infrastructure and implementation. In Section[B](https://arxiv.org/html/2511.00839v1#A2 "Appendix B Arenas ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering"), we include deep-dive discussions of each arena supported in CodeClash. Section[C](https://arxiv.org/html/2511.00839v1#A3 "Appendix C Evaluation ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering") supplements Section[3](https://arxiv.org/html/2511.00839v1#S3 "3 Experiments ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering") with additional details about evaluation parameters and metrics. Section[D](https://arxiv.org/html/2511.00839v1#A4 "Appendix D Extended Results ‣ C.3.3 Statistical validation and rank stability ‣ C.3 Evaluation Metrics ‣ C.2 Tournament Configuration ‣ 2nd item ‣ C.1 mini-SWE-agent Configuration ‣ Appendix C Evaluation ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering") contains additional results, analyses, and ablations of our experiments.

Appendix A Infrastructure
-------------------------

In this section, we discuss the tooling and infrastructure that CodeClash uses to (1) enable LMs to edit codebases and (2) automatically run codebases against each other within the code arena.
Mimicking Figure[1](https://arxiv.org/html/2511.00839v1#S1.F1 "Figure 1 ‣ 1 Introduction ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering"), we provide a more technically detailed breakdown of the CodeClash loop in Figure[9](https://arxiv.org/html/2511.00839v1#A1.F9 "Figure 9 ‣ Appendix A Infrastructure ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering").

![Image 9: Refer to caption](https://arxiv.org/html/2511.00839v1/figures/tech-overview.png)

Figure 9: Technical overview of a CodeClash round. Each round, during the edit phase, LMs edit their respective codebases within Docker containers, using mini-SWE-agent to facilitate multi-turn editing (Step 1). This is followed by the competition phase, where the codebases are copied into the arena Docker container (Step 2). The arena then runs the codebases against each other, with the gameplay and outcomes captured as logs (Step 3). These logs are copied into each player’s codebase before the next round begins (Step 4).

We format our discussion of CodeClash’s infrastructure as a series of system design questions that reflect the thought process we followed and the decisions we made while implementing CodeClash.

How should models edit their codebases? The benefits and drawbacks of different methods for LMs to interact with codebases have been investigated thoroughly by recent works(Xia et al., [2024](https://arxiv.org/html/2511.00839v1#bib.bib58); Yang et al., [2024a](https://arxiv.org/html/2511.00839v1#bib.bib61)). Inspired by both prior research insights and current, popular paradigms for AI coding tools, we wanted to ensure several key properties for how LMs manipulate a codebase in CodeClash, which is step 1 in Figure[9](https://arxiv.org/html/2511.00839v1#A1.F9 "Figure 9 ‣ Appendix A Infrastructure ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering").

1. LMs should be able to view execution feedback. Execution is crucial to enable models to create and use their own constructs (e.g., analysis scripts, memory systems).
2. LMs should be able to interact with a codebase. A defining challenge of CodeClash is that LMs operate in a self-directed manner. Workflow-oriented approaches(Xia et al., [2024](https://arxiv.org/html/2511.00839v1#bib.bib58)) are unsuitable for our setting. Going hand-in-hand with (1), interaction is also necessary so that models can string sequences of changes together.
3. LMs should operate using bash actions, not tools. As described in Yang et al. ([2024b](https://arxiv.org/html/2511.00839v1#bib.bib62)), various workflows and tools can be (un-)intentionally biased to favor particular models. Our goal is to evaluate models, not scaffolds or tools. Therefore, we have LMs operate in the most “impartial” action space. This decision also leaves an opportunity for LMs to synthesize their own tools across rounds.

Considering these points together, we found mini-SWE-agent to be the most suitable choice. mini-SWE-agent is a lightweight agent scaffold that allows LMs to interact with a codebase in a terminal environment. Per turn, an LM generates a bash command, then receives the command’s standard output as execution feedback.
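To make this interaction pattern concrete, below is a minimal sketch of a bash-only agent loop in the spirit of mini-SWE-agent. This is our illustration rather than the scaffold’s actual implementation; in particular, `query_lm` and the end-of-turn convention are hypothetical.

```python
import subprocess

def run_edit_phase(query_lm, system_prompt, max_turns=50):
    """Alternate between LM-proposed bash commands and execution feedback.

    `query_lm` maps a message history to the model's reply; we assume
    (purely for illustration) that the reply is either a bash command or
    the literal string "DONE" once the model is finished editing.
    """
    history = [{"role": "system", "content": system_prompt}]
    for _ in range(max_turns):
        command = query_lm(history).strip()    # LM proposes the next action
        history.append({"role": "assistant", "content": command})
        if command == "DONE":                  # model signals it is done editing
            break
        result = subprocess.run(command, shell=True, capture_output=True,
                                text=True, timeout=300)
        # Standard output (and stderr) become the observation for the next turn.
        history.append({"role": "user", "content": result.stdout + result.stderr})
```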
The combination of mini-SWE-agent and Claude 4 Opus scores 67.6% on SWE-bench Verified (per the bash-only leaderboard on [swebench.com](https://swebench.com/)), giving us confidence that the models we evaluate can perform bash-only interactions with a low to non-existent rate of failures due to syntactic errors such as malformed responses or actions.

How do we make CodeClash portable and reproducible? Following precedent established by existing interactive coding benchmarks(Yang et al., [2023a](https://arxiv.org/html/2511.00839v1#bib.bib59)), we use Docker to containerize the environments for (1) LMs developing their respective codebases (agent containers) and (2) running codebases in the arena (arena container). No codebase edits or arena runs are ever performed on the host machine. The only artifacts created on the local machine are logs capturing tournament metadata and outcomes.

What initial assets should a model be given? In other words, what should the starter codebase specific to each arena generally contain? To answer this, we outlined a shortlist of behaviors and conditions that should hold for any arena.

* LMs should be able to learn about the arena/game as extensively as they would like. We do not assume players have any prior knowledge about how the arena works.
* LMs should be able to run the arena to understand it and perform testing.
* LMs are provided with a simple but functional baseline strategy that demonstrates core mechanics. A player does not need to code a valid submission from scratch.

Based on this, we make sure every starter codebase has the following assets:

* Documentation: For every arena, we were able to find source code containing arena documentation (e.g., [https://github.com/BattlesnakeOfficial/docs](https://github.com/BattlesnakeOfficial/docs)). We copy documentation into a docs/ folder in every arena’s starter codebase.
* Arena executable: Any executables and assets needed to run a round of the arena are fully available to each player. However, the exact bash commands are not disclosed; the burden remains on the model to figure out how to use these assets.
* Working submission: Just as human participants are provided with a simple, functional, but suboptimal baseline strategy, LMs are given a starter codebase that can be submitted as is. This ensures meaningful competition from the first round.

In practice, for any arena, the starter codebases for each player and the codebase used to run the competition across multiple codebases are identical.

Per round, how many times should a competition be run? This question stems from the non-determinism we observed in the majority of CodeClash arenas. With the exception of MIT Battlecode 2025, we found that given the same codebases and the same arena, the outcome of a single simulation is indeterminate, which is to be expected.

In order to declare a winner with confidence, at step 3 in Figure[9](https://arxiv.org/html/2511.00839v1#A1.F9 "Figure 9 ‣ Appendix A Infrastructure ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering"), the arena runs the competition 1000 times per round. We declare as winner whichever player wins the most of the 1000 simulations (or declare a tie if ties are most frequent), rather than requiring a specific win-percentage threshold.
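This majority rule can be restated in a few lines; a minimal sketch (our illustration; the outcome labels and the handling of an exactly tied vote are assumptions):

```python
from collections import Counter

def declare_round_winner(outcomes):
    """Majority vote over per-simulation outcomes, e.g. 1000 entries of
    'player_1', 'player_2', or 'tie'. If two outcomes are equally most
    frequent, the round is treated as a tie."""
    ranked = Counter(outcomes).most_common()
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return "tie"
    return ranked[0][0]

# player_1 takes the round with 512 of 1000 simulation wins:
print(declare_round_winner(["player_1"] * 512 + ["player_2"] * 488))
```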
This majority-vote approach aligns with standard practice in competitive gaming communities and avoids introducing arbitrary performance cutoffs. We review concretely how we calculate win rate and Elo in §[C.3.1](https://arxiv.org/html/2511.00839v1#A3.SS3.SSS1 "C.3.1 Definitions ‣ C.3 Evaluation Metrics ‣ C.2 Tournament Configuration ‣ 2nd item ‣ C.1 mini-SWE-agent Configuration ‣ Appendix C Evaluation ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering").

How can models improve their codebase? A cornerstone of performing well in CodeClash is a model’s ability to understand past rounds’ outcomes, then adapt the codebase to perform better in the arena against the opponent(s).

To encourage such behavior, both the proceedings and the outcome of each simulation are logged. The precise format of the logs depends on the arena. These logs are then copied from the arena container back into the agent containers, specifically into a designated logs/ folder within the agent’s codebase, as reflected by step 4 in Figure[9](https://arxiv.org/html/2511.00839v1#A1.F9 "Figure 9 ‣ Appendix A Infrastructure ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering").

How the model interprets these logs or acts upon them is entirely self-driven. In the initial system prompt, we generally mention that analyzing logs might be helpful, but we do not provide any arena-specific advice on how exactly logs should be interpreted. In practice, we have observed a spectrum of interesting approaches: models will directly read the raw logs, write scripts to extract insights, or even modify the logs. We share more observations in §[D](https://arxiv.org/html/2511.00839v1#A4 "Appendix D Extended Results ‣ C.3.3 Statistical validation and rank stability ‣ C.3 Evaluation Metrics ‣ C.2 Tournament Configuration ‣ 2nd item ‣ C.1 mini-SWE-agent Configuration ‣ Appendix C Evaluation ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering").

What happens if a model’s codebase is not a valid submission? We observed during early trials that models will occasionally errantly modify a codebase such that it no longer functions properly when run in the arena. These error modes are most frequently due to certain expectations about the codebase not holding. For instance:

* For Battlecode, the main bot logic should be represented entirely within a ./bot.py file that implements a turn function.
* For Battlesnake, the bot is in main.py, which implements a move function.
* For RoboCode, the tank bot should be defined under robots/custom/, and the code must pass compilation (javac -cp "libs/robocode.jar" robots/custom/*.java).

We note that we do not define these constraints ourselves – these rules reflect the original conditions that the arenas and games impose on human players and their submissions.

To address this, we first implement per-arena validation to check that a codebase is ready for competition. The check is run at the outset of step 3 in Figure[9](https://arxiv.org/html/2511.00839v1#A1.F9 "Figure 9 ‣ Appendix A Infrastructure ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering"). Second, we define the following decision tree to handle situations where one or more players have invalid codebases (restated as a sketch after this list):

* If all player codebases are invalid, the round is declared a tie.
* If only one player codebase is valid, that player is declared the winner.
* If 2+ player codebases are valid, the competition phase is run with all valid codebases. Any invalid codebases are excluded.
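The decision tree itself is compact enough to restate directly; a minimal sketch with hypothetical player names:

```python
def resolve_round(validation_results):
    """`validation_results` maps player name -> whether its codebase
    passed the per-arena validation check."""
    valid = [name for name, ok in validation_results.items() if ok]
    if not valid:                  # all codebases invalid
        return "tie"
    if len(valid) == 1:            # sole valid codebase wins by default
        return f"{valid[0]} wins"
    return f"run competition with {valid}"   # 2+ valid: invalid ones excluded

print(resolve_round({"p1": False, "p2": True}))  # "p2 wins"
```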
Do arenas have positional advantages, and how are such advantages accounted for? A positional advantage refers to a situation where, assuming 2+ players have identical codebases, one player consistently wins. We want to eliminate such advantages in CodeClash, as they unfairly affect arena outcomes in ways that are outside of a player’s control.

To detect whether positional advantages are present, we run exactly this experiment: for every arena, we run a tournament with two “dummy” players that never change the initial codebase. Each tournament is run for 25 rounds, and the order of players is fixed. We then check round outcomes, with the expectation that a ∼50% win rate indicates no positional advantage is present. From this investigation, we found MIT Battlecode 2025 to be the only arena that showed evidence of positional advantage.

However, checking for positional advantages may be tedious to repeat constantly for new arenas or when arena settings are adjusted (e.g., the map being used for Battlecode, battlefield dimensions for RoboCode). Therefore, to reliably eliminate any advantage, we simply shuffle the order of players uniformly at random at step 3 in Figure[9](https://arxiv.org/html/2511.00839v1#A1.F9 "Figure 9 ‣ Appendix A Infrastructure ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering"), immediately after the codebase validation step. We verified this fix by re-running the prior experiment for MIT Battlecode 2025 and found that the win rate returned to 50%.

Trajectories are tedious to parse. Reading arena logs and mini-SWE-agent editing trajectories in their raw form was extremely laborious. To make it easier to understand what happened throughout the course of a tournament, we wrote a viewer for CodeClash logs that provides friendly visualizations of log content and automatically calculates game statistics (e.g., a p-value indicating whether a round winner is statistically significant).
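Both the dummy-player check and the viewer’s significance statistic reduce to comparing observed wins against a fair coin. A minimal sketch of such a one-sided binomial test (our illustration, not the toolkit’s exact implementation):

```python
from math import comb

def p_value_wins(wins, n):
    """One-sided binomial p-value: the probability of observing at least
    `wins` wins in `n` simulations if both players were truly equal
    (p = 0.5). Small values flag a statistically meaningful edge."""
    return sum(comb(n, k) for k in range(wins, n + 1)) / 2 ** n

# 540 wins out of 1000 runs between identical codebases would be
# surprising under the no-advantage hypothesis (p is roughly 0.006):
print(p_value_wins(540, 1000))
```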
Appendix B Arenas
-----------------

This section contains arena cards describing each of the code arenas supported in CodeClash. Per arena, we cover the objective(s), arena mechanics, log formats, and effective strategies. We summarize all supported arenas in Table[2](https://arxiv.org/html/2511.00839v1#A2.T2 "Table 2 ‣ Appendix B Arenas ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering").

Table 2: Code arenas currently implemented in CodeClash. Arenas represent a diverse landscape of objectives (e.g., eliminate opponents, accumulate money/resources), programming languages, and challenges (e.g., decipher opponent strategy from logs, decide how to adapt code, manage a growing codebase). n is the number of players.

### B.1 MIT Battlecode 2025(Battlecode, [2025](https://arxiv.org/html/2511.00839v1#bib.bib6))

The MIT Battlecode organization is a student-run group at the Massachusetts Institute of Technology that creates and hosts coding competitions. CodeClash specifically supports the 2025 edition of the competition. As described on the [website](https://battlecode.org/):

> Battlecode is a real-time strategy game in which you will write code for an autonomous player. Your player will need to strategically manage a robot army and control how your robots work together to defeat the enemy team.

![Image 10: Refer to caption](https://arxiv.org/html/2511.00839v1/figures/arena_card/battlecode.png)

Figure 10: Battlecode 2025: Chromatic Conflict screen capture. The goal is to control a team of robobunnies to paint 70% of the map.

```python
import random
from battlecode25.stubs import *

turn_count, directions = 0, [...]

def turn():
    ...

def run_tower():
    ...

def run_soldier():
    ...

def run_mopper():
    ...

def update_enemy_robots():
    ...
```

Figure 11: A Battlecode codebase must implement a core turn function that issues controls for three different kinds of units.

What are effective strategies? Some effective approaches include efficient algorithms for path-finding/exploration, coordinating communication between agents, and finding the right balance between offensive moves (e.g., attacking, painting, destroying towers) and defensive measures (protecting territory, placing towers, maintaining a stream of resources).

What assets are provided in the initial codebase? run.py is the Python script used to run players and upgrade versions. src/ is the directory meant to contain all player source code, and test/ contains all player test code. client/ contains the client; the proper executable can be found in this folder. matches/ is the output folder for match files. maps/ is the default folder for custom maps.

What are the arena configurations? For the 2025 edition, “Chromatic Conflict”, two teams of virtual robots roam the screen, managing resources and executing different offensive strategies against each other. Two types of resources exist in the arena: Money and Paint. Money is needed to produce units, buy towers, and activate economy-boost patterns (called SRPs). Paint is needed to produce units, for the win condition, to resupply units with paint, and to paint special patterns, which are prerequisites for acquiring SRPs and towers. There are also two kinds of soldiers: Moppers and Splashers. Moppers can attack other units without spending paint, which makes them the only unit capable of surviving indefinitely without a tower. They can also clean up enemy paint, making them essential for removing enemy paint from ally patterns. Splashers can paint over enemy paint with ally paint and are the only unit that can paint several squares at once. The last component of the arena is towers, immobile units that can spawn other units. Money and Paint Towers passively generate the corresponding resources. Defense Towers have high damage output and generate chips upon attacking enemy units.

How is the winner determined? The winner is the first team that is able to “paint” 70% of the map.

How are arena logs formatted? The arena logs are written as a sequential record of the match. They begin with setup information, including which bots are playing and on which map. After that, each line corresponds to a turn, tagged with the acting player and unit, followed by the action taken (e.g., spawning a new robot, attempting to build a tower, or performing a mop swing attack). In effect, the log provides a turn-by-turn narrative: what units were created, what abilities were triggered, and how each side attempted to advance.
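Given this line-oriented structure, a model can mine the logs with a short script. As one illustration, the sketch below tallies actions per player; the `<player> <unit> <action>` line format here is a simplification we assume for the example, not the arena’s exact schema:

```python
from collections import Counter
from pathlib import Path

def tally_actions(log_path):
    """Count (player, action) pairs from a simplified turn-per-line log
    assumed to look like '<player> <unit-id> <action> ...'."""
    counts = Counter()
    for line in Path(log_path).read_text().splitlines():
        parts = line.split()
        if len(parts) >= 3:
            counts[(parts[0], parts[2])] += 1
    return counts

# e.g., counts[("team_a", "build_tower")] -> how often team_a built towers
```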
### B.2 Battlesnake(Chung et al., [2020](https://arxiv.org/html/2511.00839v1#bib.bib14))

Battlesnake is a multi-player game where each player’s code controls a snake operating on a grid. The arena’s rules and objectives are heavily reminiscent of the traditional snake game. The general objective is to program your snake to survive as long as possible.

The game starts with 2+ snakes positioned in different quadrants of the grid. Throughout the course of the game, food pellets pop up; if a snake consumes a pellet (moves into the cell containing it), the snake’s body grows longer by one cell. There are several ways a snake can “die”. If it collides with a wall, its own body, or another snake that is longer, the snake is eliminated. If a snake fails to make a legal move on any particular turn, it is likewise eliminated. The winner is the last remaining snake, or the longest snake alive if multiple remain when the turn limit is exhausted.

![Image 11: Refer to caption](https://arxiv.org/html/2511.00839v1/figures/arena_card/battlesnake.png)

Figure 12: Battlesnake screen capture. Your code controls a snake that should find food, avoid other snakes, and survive.

```python
def info():
    return {"author": "", "color": "#888888"}  # ...

def start(game_state):
    ...

def end(game_state):
    ...

def move(game_state):
    ...
    return {"move": "up"}
```

Figure 13: A Battlesnake codebase must implement a core move function that returns the direction the snake moves next.

What are effective strategies? Effective Battlesnake bots rely on strategies that balance safety, space control, and efficient movement. A common approach is to use flood-fill or area estimation to avoid moves that lead into regions with insufficient space, reducing the chance of being trapped. Pathfinding algorithms such as A* help snakes reach food or navigate safely around hazards, often incorporating penalties for risky tiles near enemy heads. Many bots also implement look-ahead search, simulating several future turns to predict collisions and maintain advantageous positioning. Finally, strong bots prioritize risk-aware heuristics, such as only engaging opponents when longer or only pursuing food when health is low.
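For instance, the flood-fill heuristic mentioned above fits in a dozen lines; here is a minimal sketch, assuming the standard 11×11 board and a set of blocked cells (our illustration, not a competitive bot):

```python
from collections import deque

def flood_fill_area(start, blocked, width=11, height=11):
    """Count cells reachable from `start` with up/down/left/right moves,
    treating cells in `blocked` (snake bodies) as impassable. Moves whose
    reachable area is smaller than the snake's length risk trapping it."""
    seen, queue = {start}, deque([start])
    while queue:
        x, y = queue.popleft()
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in blocked and nxt not in seen):
                seen.add(nxt)
                queue.append(nxt)
    return len(seen)

# Pick the candidate head position with the most room to maneuver:
# best = max(candidates, key=lambda c: flood_fill_area(c, blocked))
```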
What assets are provided in the initial codebase? The docs/ folder serves as the full documentation hub for the Battlesnake platform, containing subdirectories such as api/, guides/, maps/, and policies/, which collectively explain how to use the Battlesnake API, configure maps, follow gameplay policies, and get started with development. It also includes Markdown files like README.md, index.md, and quickstart.md for setup instructions; rules.md detailing official game rules and snake behavior; faq.md answering common developer questions; and starter-projects.md offering templates for new Battlesnake projects. Complementing the documentation, the game/ directory contains the full Go implementation of Battlesnake’s core logic. Key source files such as board.go, ruleset.go, standard.go, and pipeline.go define how the game board is represented, how rules are enforced, and how turns are processed. Specialized variants of the game board like royale.go, solo.go, constrictor.go, and wrapped.go implement different modes. Other files in the root directory include main.py, which serves as a starter template for Battlesnake logic and helper functions, server.py for server setup and request handling, requirements.txt listing Python dependencies, and a Dockerfile for containerized deployment.

What are the arena configurations? The Standard Arena in Battlesnake is the default game environment, adhering to the core game rules without any modifications. In this arena, the number of Battlesnakes can vary, from a 1v1 match to multiple competing snakes, such as four or eight. The game board is a square grid measuring 11×11 cells, totaling 121 cells. Each cell is a discrete unit that snakes and food can occupy. The arena’s boundaries are defined by the edges of this grid, and snakes are restricted to moving within these confines. Movement is allowed in four directions: up, down, left, and right, with no diagonal movement permitted. At the start of the game, snakes are placed at random positions within the arena, and food items are similarly distributed across the grid.

How is the winner determined? In Battlesnake, the winner is determined by being the last remaining snake on the game board. Each snake takes turns moving, loses one health point per turn, and can regain health by consuming food, which also causes the snake to grow in length. Snakes are eliminated in several ways: colliding with their own body, colliding with another snake’s body, or engaging in a head-to-head collision with another snake. In head-to-head collisions, the longer snake survives while the shorter one is eliminated; if both snakes are the same length, both are removed from the game. Players must carefully manage their health, navigate the board without running into obstacles or other snakes, and strategically consume food to survive longer than their opponents. The game continues until only one snake remains, and that snake is declared the winner.

How are arena logs formatted? The log for a single competition run is represented as a single .jsonl file, where each line in the file is a dictionary corresponding to a single turn of the run. Each line of a Battlesnake log records the complete state of the game at a given turn. It captures the ruleset and configuration, the current turn number, the map dimensions, and the positions and attributes of all snakes (their ID, health, body coordinates, head position, and length). It also lists the placement of food and hazards at that moment, as well as the perspective of the specific snake whose API is being called. In other words, every log entry is a snapshot of the board state.

### B.3 Core War(Jones & Dewdney, [1984](https://arxiv.org/html/2511.00839v1#bib.bib30))

In Core War, players write small assembly-esque programs (called “warriors”). The programs run in a simulated, shared virtual memory. The goal of every program is to disable all opposing programs; the ultimate objective is to be the last program standing.

A unique facet of Core War is that the programming language, RedCode, is specific to the game. RedCode supports basic operations (e.g., mov, add, jump, compare) along with multiple addressing modes (e.g., immediate, direct, indirect). Warriors compete in the “core”, which is generally a fixed-size, circular memory array that resembles main memory (RAM). The core is simulated by MARS (the Memory Array Redcode Simulator). Execution then proceeds in cycles: each cycle, the simulator alternates between warriors and executes one instruction per active process. If a process executes an invalid instruction or hits an illegal condition, the process dies. Warriors can also be designed to spawn additional processes with special instructions (SPL). If all of a warrior’s processes are killed, it is eliminated. Core War games are typically played up to a maximum number of cycles; if no warrior is eliminated by the end, the round is a draw.

![Image 12: Refer to caption](https://arxiv.org/html/2511.00839v1/figures/arena_card/corewar.png)

Figure 14: Core War screen capture. Your code is a Redcode warrior that must disable all opposing programs in the shared memory core.
```redcode
;redcode-94
;name Dwarf
;author A.K. Dewdney
;strategy A simple warrior

start   add.ab  #4,    bmb
        mov.i   bmb,   @bmb
        jmp     start
bmb     dat     #0,    #0
```

Figure 15: This Core War program, called Dwarf, is a minimal attacking warrior. It repeatedly increments the pointer bmb (add.ab #4, bmb), copies the dat instruction to the location it points to (mov.i bmb, @bmb), and then loops back (jmp start). The effect is that every fourth memory cell in the core is overwritten with a dat “bomb”, gradually scattering lethal instructions that kill an opponent’s process if executed.

What are effective strategies? Core War warriors typically incorporate three dimensions – offense, defense, and adaptability. A common offensive strategy is to write loops that scatter “bombs” (invalid instructions) into memory, similar to the program in Figure[15](https://arxiv.org/html/2511.00839v1#A2.SS3 "B.3 Core War (Jones & Dewdney, 1984) ‣ Appendix B Arenas ‣ CodeClash: Benchmarking Goal-Oriented Software Engineering"). Another approach is to write programs that replicate as much as possible to increase their survival rate. An advanced warrior will usually combine such tactics.

What assets are provided in the initial codebase? The codebase contains three main directories (config/, docs/, and src/) and provides a complete Core War environment, including the assembler, simulator (virtual machine), documentation, and example warriors. In config/, different files define configuration profiles for the pMARS simulator, allowing tournaments or simulations under multiple rule sets and tuning the VM for different “arena sizes.” The docs/ folder describes how Core War works and how to write Redcode warriors. src/ provides source code for the pMARS simulator and assembler, including files that implement the display and UI modules, core files, and configuration.

What are the arena configurations? Core War is a game in which two or more virus-like programs fight against each other in a simulated memory space, or core. Core War programs are written in an assembly language called Redcode, which is interpreted by a Core War simulator, or MARS (Memory Array Redcode Simulator). The object of the game is to prevent the other program(s) from executing. At the start of a match, each warrior is loaded into a random memory location. Programs take turns executing one instruction at a time. A program wins by terminating all opponents, typically by causing them to execute invalid instructions, leaving the victorious program in sole possession of the machine.

How is the winner determined? In the standard Core War rules, the winner is the last warrior still “alive” (i.e., having at least one process still running) or the last to execute a valid “live” instruction. A warrior “dies” when it has no remaining processes. Processes can die if they execute an invalid instruction or are overwritten.

How are arena logs formatted? Core War logs generally report outcomes: which warrior survived, how many “processes” (active execution threads) they maintained, or how many cycles elapsed before the match ended. These logs don’t usually show step-by-step instruction execution; instead, they give a high-level summary of win/loss/tie, survival, and match duration.

### B.4 Halite I(Truell & Spector, [2016](https://arxiv.org/html/2511.00839v1#bib.bib52))

For Halite, players write autonomous bots that battle head-to-head with the goal of taking over the largest share of a virtual grid.
Each bot issues commands every turn to move, collect, and deposit halite, a valuable in-game resource. The objective is to maximize your halite by the end of the match while strategically navigating around opponents and avoiding collisions. Bots use their strength to gain territory, and their territory to gain strength, outmaneuvering opponents based on the relative sophistication of their code.

A distinctive aspect of Halite is that it combines algorithmic strategy with real-time resource optimization. Players can program their bots in one of 4 languages (C, C++, OCaml, and Rust), and the game environment simulates simultaneous turns, where every decision, from choosing optimal collection routes to predicting enemy movements, can make the difference between victory and defeat. Matches are visualized in an animated replay, saved as an .hlt file, allowing players to analyze and refine their bot’s performance across different maps and opponents.

The Halite series also includes Halite II and Halite III, follow-up iterations of the initial competition with significant updates to the nature of the competition. To be clear, the version of Halite described here is specifically Halite I, released in 2016. We plan to support Halite II and Halite III in CodeClash in the near future.

![Image 13: Refer to caption](https://arxiv.org/html/2511.00839v1/figures/arena_card/halite.png)

Figure 16: Halite screen capture. Your code controls bots that collect halite and compete to control the largest share of the grid.

1#include"hlt.h"

2#define BOT_NAME"MyCBot"

3

4 int main(void){

5 GAME game;

6 game=GetInit();

7 SendInit(BOT_NAME);

8 while(1){

9 GetFrame(game);

10 for(x=0;x