diff --git "a/2503/2503.19877.md" "b/2503/2503.19877.md"
new file mode 100644
--- /dev/null
+++ "b/2503/2503.19877.md"
@@ -0,0 +1,546 @@
+Title: Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators
+
+URL Source: https://arxiv.org/html/2503.19877
+
+Published Time: Wed, 26 Mar 2025 01:23:33 GMT
+
+Markdown Content:
+
+∗ Main contributors
+
+Figure 1: We investigate the effect of scaling test-time compute for evaluation (evaluation-time compute) with long CoTs: We demonstrate that enforcing the generation of additional reasoning tokens leads to improved evaluator performance ([section 3](https://arxiv.org/html/2503.19877v1#S3 "3 Experiment 1: Evaluation-time Scaling Trends ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators")), and that this improvement can, in turn, be utilized to improve the generator’s problem-solving capabilities ([section 4](https://arxiv.org/html/2503.19877v1#S4 "4 Experiment 2: Translating Improved Evaluation to Problem-solving ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators")).
+
+Seungone Kim 1∗ Ian Wu 2∗ Jinu Lee 3∗ Xiang Yue 1
+
+Seongyun Lee 4 Mingyeong Moon 2 Kiril Gashteovski 5,6 Carolin Lawrence 5
+
+Julia Hockenmaier 3 Graham Neubig 1 Sean Welleck 1
+1 CMU 2 Independent Researcher 3 UIUC 4 KAIST AI
+
+5 NEC Laboratories Europe 6 Ss. Cyril and Methodius University of Skopje
+
+{seungone, gneubig, wellecks}@cmu.edu ianyhwu.97@gmail.com
+
+{jinulee2, juliahmr}@illinois.edu
+
+###### Abstract
+
+As language model (LM) outputs get more and more natural, it is becoming more difficult than ever to evaluate their quality. Simultaneously, increasing LMs’ “thinking” time through scaling test-time compute has proven an effective technique to solve challenging problems in domains such as math and code. This raises a natural question: can an LM’s evaluation capability also be improved by spending more test-time compute? To answer this, we investigate employing reasoning models – LMs that natively generate long chain-of-thought reasoning – as evaluators. Specifically, we examine methods to leverage more test-time compute by (1) using reasoning models, and (2) prompting these models to evaluate not only the response as a whole (i.e., outcome evaluation) but also assess each step in the response separately (i.e., process evaluation). In experiments, we observe that the evaluator’s performance improves monotonically when generating more reasoning tokens, similar to the trends observed in LM-based generation. Furthermore, we use these more accurate evaluators to rerank multiple generations, and demonstrate that spending more compute at evaluation time can be as effective as using more compute at generation time in improving an LM’s problem-solving capability. Our code and data are publicly available at [https://github.com/prometheus-eval/scaling-evaluation-compute](https://github.com/prometheus-eval/scaling-evaluation-compute).
+
+1 Introduction
+--------------
+
+Research on language models (LMs) involves an interplay between generation and evaluation: better generators require better evaluators and better evaluators can further enhance generators. For instance, an evaluator can verify the quality of the generator’s response[[29](https://arxiv.org/html/2503.19877v1#bib.bib29), [64](https://arxiv.org/html/2503.19877v1#bib.bib64), [59](https://arxiv.org/html/2503.19877v1#bib.bib59)] or identify the parts of a generator’s response that contain mistakes[[30](https://arxiv.org/html/2503.19877v1#bib.bib30), [63](https://arxiv.org/html/2503.19877v1#bib.bib63), [62](https://arxiv.org/html/2503.19877v1#bib.bib62)]. Furthermore, the generator’s performance can be improved by integrating the better evaluator into inference-time algorithms[[4](https://arxiv.org/html/2503.19877v1#bib.bib4), [49](https://arxiv.org/html/2503.19877v1#bib.bib49), [30](https://arxiv.org/html/2503.19877v1#bib.bib30), [46](https://arxiv.org/html/2503.19877v1#bib.bib46), [55](https://arxiv.org/html/2503.19877v1#bib.bib55)].
+
+Reasoning models have opened up a new paradigm for generation based on generating a long chain-of-thought (CoT) prior to the final response[[17](https://arxiv.org/html/2503.19877v1#bib.bib17), [11](https://arxiv.org/html/2503.19877v1#bib.bib11), [37](https://arxiv.org/html/2503.19877v1#bib.bib37), [1](https://arxiv.org/html/2503.19877v1#bib.bib1)]. Specifically, prior works have demonstrated that generating long CoT is an effective strategy for leveraging test-time compute to solve difficult tasks that conventional instruction-tuned models cannot solve[[60](https://arxiv.org/html/2503.19877v1#bib.bib60), [57](https://arxiv.org/html/2503.19877v1#bib.bib57)]. However, it remains unclear whether evaluators, like generators, can also be improved by scaling test-time compute with long CoTs. In this paper, we ask two questions: (1) can we replicate test-time scaling behavior observed in generators with evaluators that use more test-time compute? and (2) if so, can this improved evaluation ability (achieved by evaluation-time scaling) further improve generation results as well?
+
+Our main contribution in this work is a thorough examination of evaluators that use long CoT reasoning, which we refer to as reasoning evaluators. We contrast these with direct evaluators of previous work, which directly predict an evaluation score without this reasoning process. As shown in the left of [Figure 1](https://arxiv.org/html/2503.19877v1#S0.F1 "Figure 1 ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators"), we induce reasoning evaluators to generate more reasoning tokens by prompting them to evaluate both each step of an output solution and the solution as a whole. This recipe unifies techniques from prior work on step-by-step evaluation (process reward models; PRMs; [[30](https://arxiv.org/html/2503.19877v1#bib.bib30), [51](https://arxiv.org/html/2503.19877v1#bib.bib51), [63](https://arxiv.org/html/2503.19877v1#bib.bib63)]) and outcome-based evaluation (outcome reward models; ORMs; [[4](https://arxiv.org/html/2503.19877v1#bib.bib4), [15](https://arxiv.org/html/2503.19877v1#bib.bib15), [31](https://arxiv.org/html/2503.19877v1#bib.bib31)]).
+
+We demonstrate the effectiveness of our approach across two settings. First, as shown in the middle plot of [Figure 1](https://arxiv.org/html/2503.19877v1#S0.F1 "Figure 1 ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators"), we find that the evaluator’s performance improves monotonically as it generates more reasoning tokens. We further show that a 32B reasoning evaluator can outperform a 72B state-of-the-art PRM by a 4.5% margin on ProcessBench[[63](https://arxiv.org/html/2503.19877v1#bib.bib63)], a benchmark that measures whether an LM can identify the paragraph containing the first error within a given response. This is notable because while existing direct evaluators are trained on large amounts of process labels, our reasoning evaluators achieve strong performance through evaluation-time scaling alone, without any training.
+
+Second, as shown in the right plot of [Figure 1](https://arxiv.org/html/2503.19877v1#S0.F1 "Figure 1 ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators"), we find that evaluation-time scaling is an effective method for further improving the generator’s performance. Concretely, when integrating reasoning evaluators into Best-of-$N$ sampling, where an evaluator reranks multiple solutions sampled by a generator, our reasoning evaluators using Best-of-8 outperform direct evaluators (e.g., ORMs, PRMs) using Best-of-64 by a 4.30% to 6.63% margin within a fixed compute budget. This highlights the benefits of spending more test-time compute for evaluation in addition to sampling more responses. Notably, we show that our approach is particularly effective in settings where there are insufficient process labels to train a direct evaluator (e.g., PRMs for verifying code correctness).
+
+2 Methodology
+-------------
+
+In this section, we describe our approach for scaling evaluation-time compute by assessing both overall responses ([subsection 2.1](https://arxiv.org/html/2503.19877v1#S2.SS1 "2.1 Reasoning Outcome Evaluators ‣ 2 Methodology ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators")) and individual response segments ([subsection 2.2](https://arxiv.org/html/2503.19877v1#S2.SS2 "2.2 Reasoning Process Evaluators ‣ 2 Methodology ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators")). We then explain how we combine the process and outcome judgments ([subsection 2.3](https://arxiv.org/html/2503.19877v1#S2.SS3 "2.3 Combining outcome judgments and process judgments ‣ 2 Methodology ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators")) to further improve evaluation.
+
+#### Reasoning Evaluators vs. Direct Evaluators
+
+We refer to conventional evaluators that are trained to map a problem and a response (or steps) to a scalar value score as _direct evaluators_. Reasoning evaluators differ from direct evaluators in three aspects. First, reasoning evaluators generate chain-of-thought (CoT) reasoning before predicting the final judgment. Second, while direct evaluators typically use a specially trained reward modeling head (i.e., a linear layer that outputs a scalar value), reasoning evaluators can use a language modeling head (i.e., a linear layer that outputs a probability distribution over tokens in the vocabulary) to generate both the CoT and the final judgment, with the logit values of the LM head used to acquire a scalar value score. (An exception is Ankner et al. [[2](https://arxiv.org/html/2503.19877v1#bib.bib2)], who train an evaluator that uses the language modeling head to generate a CoT and a reward modeling head to predict the score; other works such as Zhang et al. [[61](https://arxiv.org/html/2503.19877v1#bib.bib61)] and Mahan et al. [[34](https://arxiv.org/html/2503.19877v1#bib.bib34)] use only the language modeling head for both generating the CoT and the judgment.) Lastly, while direct evaluators with a specialized reward modeling head must be trained, reasoning evaluators may either be specifically trained for evaluation or may be off-the-shelf LMs that are prompted to act as evaluators. In this paper, we focus on the latter approach by prompting reasoning models to function as evaluators.
+
+#### Notation
+
+Given a problem $x_i$ and a corresponding response $y_i$, the objective of an evaluator is to estimate the goodness of $y_i$ by generating a score $s_i$. The mapping functions of an outcome evaluator and a process evaluator can be expressed as:
+
+* Outcome Evaluator: $(x_{i},y_{i})\rightarrow s_{i}$
+* Process Evaluator: $(x_{i},[y_{i1},y_{i2},\ldots,y_{iN}])\rightarrow[s_{i1},s_{i2},\ldots,s_{iN}]$
+
+Process evaluators also require an additional splitting function to divide $y_i$ into $[y_{i1},y_{i2},\ldots,y_{iN}]$. Further, process evaluators can also be used to evaluate the final outcome if we provide an aggregation function that maps $[s_{i1},s_{i2},\ldots,s_{iN}]$ to $s_i$. Conventionally, a heuristic-based approach is used as the splitting function (e.g., splitting on “\n\n” or using an LM to generate some structured output “[STEP 1] … [STEP N] …”), while the min function (i.e., $s_{i}=\min(s_{i1},s_{i2},\ldots,s_{iN})$) is often used as the aggregation function[[30](https://arxiv.org/html/2503.19877v1#bib.bib30), [51](https://arxiv.org/html/2503.19877v1#bib.bib51), [46](https://arxiv.org/html/2503.19877v1#bib.bib46)]. As alluded to in the previous paragraph, direct outcome and process evaluators typically predict these values directly through specially trained heads. In the following subsections, we discuss how to make these predictions using reasoning models.
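+
+For concreteness, a minimal Python sketch of these two conventions (the function names are illustrative, not taken from the paper’s released code):
+
+```python
+from typing import List
+
+def heuristic_split(response: str) -> List[str]:
+    """Split a response into steps on blank lines ("\\n\\n")."""
+    return [step for step in response.split("\n\n") if step.strip()]
+
+def min_aggregate(step_scores: List[float]) -> float:
+    """Aggregate per-step scores [s_i1, ..., s_iN] into an outcome score s_i via min."""
+    return min(step_scores)
+```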
+
+
+
+Figure 2: Illustration of reasoning evaluators: We propose scaling evaluation-time compute by combining outcome evaluation and process evaluation with reasoning models as evaluators.
+
+### 2.1 Reasoning Outcome Evaluators
+
+#### Method
+
+As illustrated in the upper section of [Figure 2](https://arxiv.org/html/2503.19877v1#S2.F2 "Figure 2 ‣ Notation ‣ 2 Methodology ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators") (light red color), our reasoning outcome evaluators have at their core a function:
+
+$$(x_{i},y_{i})\rightarrow(c_{i},j_{i}), \tag{1}$$
+
+where $c_i$ denotes a CoT, and $j_i$ is the evaluator’s judgment, represented as a probability distribution over tokens in the vocabulary. In our experiments, we prompt the LM to output “1” if the response is deemed to be correct and “0” if not. Hence, to transform $j_i$ into a scalar value score $s_i$, we use the logits $\ell$ of the “1” and “0” tokens and perform a softmax operation:
+
+$$s_{i}=\frac{e^{\ell(j_{i}=1)}}{e^{\ell(j_{i}=0)}+e^{\ell(j_{i}=1)}}. \tag{2}$$
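+
+Concretely, Equation 2 is a two-way softmax over the judgment-token logits. A minimal sketch, assuming the inference stack exposes the logits of the “1” and “0” tokens at the judgment position:
+
+```python
+import math
+
+def outcome_score(logit_one: float, logit_zero: float) -> float:
+    """Eq. 2: two-way softmax over the logits of the "1" and "0" judgment tokens."""
+    m = max(logit_one, logit_zero)  # subtract the max for numerical stability
+    return math.exp(logit_one - m) / (math.exp(logit_zero - m) + math.exp(logit_one - m))
+```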
+
+### 2.2 Reasoning Process Evaluators
+
+#### Method
+
+As illustrated in the bottom section of [Figure 2](https://arxiv.org/html/2503.19877v1#S2.F2 "Figure 2 ‣ Notation ‣ 2 Methodology ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators") (colored in light blue), we formulate the mapping function for assessing each reasoning step $k$ with a reasoning process evaluator as:
+
+$$(x_{i},[y_{i1},\ldots,y_{ik}])\rightarrow(c_{ik},j_{ik})\quad(1\leq k\leq N), \tag{3}$$
+
+where $c_{ik}$ denotes the CoT that examines $y_{ik}$ for potential logical flaws or inconsistencies and $j_{ik}$ denotes the judgment for $y_{ik}$, which is also represented as a probability distribution. Note that the previous steps $[y_{i1},\ldots,y_{i(k-1)}]$ are provided as additional context for precise assessment of the current step $y_{ik}$.
+
+We then convert each $j_{ik}$ into $s_{ik}$, similarly to [Equation 2](https://arxiv.org/html/2503.19877v1#S2.E2 "2 ‣ Method ‣ 2.1 Reasoning Outcome Evaluators ‣ 2 Methodology ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators"):
+
+$$[s_{i1},\ldots,s_{iN}]=\left[\frac{e^{\ell(j_{i1}=1)}}{e^{\ell(j_{i1}=0)}+e^{\ell(j_{i1}=1)}},\ldots,\frac{e^{\ell(j_{iN}=1)}}{e^{\ell(j_{iN}=0)}+e^{\ell(j_{iN}=1)}}\right] \tag{4}$$
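+
+In code, Equations 3 and 4 amount to a loop over growing prefixes of the solution. A minimal sketch, where `evaluate_step` is a hypothetical callable wrapping the reasoning model:
+
+```python
+import math
+from typing import Callable, List, Tuple
+
+def process_scores(
+    problem: str,
+    steps: List[str],
+    evaluate_step: Callable[[str, List[str]], Tuple[float, float]],
+) -> List[float]:
+    """Score each step given the problem and all preceding steps (Eqs. 3-4)."""
+    scores = []
+    for k in range(1, len(steps) + 1):
+        # evaluate_step runs the reasoning model on (x_i, [y_i1..y_ik]) and is
+        # assumed to return the logits of the "1" and "0" judgment tokens.
+        logit_one, logit_zero = evaluate_step(problem, steps[:k])
+        m = max(logit_one, logit_zero)  # two-way softmax, as in Eq. 2/4
+        scores.append(
+            math.exp(logit_one - m)
+            / (math.exp(logit_zero - m) + math.exp(logit_one - m))
+        )
+    return scores
+```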
+
+#### Single- vs. multi-step evaluation
+
+The above formulation forms the core of our proposed method, but in ablations we also compare with an ablated _single-step_ reasoning process evaluator proposed by Zheng et al. [[63](https://arxiv.org/html/2503.19877v1#bib.bib63)] that generates a single CoT before making judgments about all $N$ reasoning steps in $y_i$:
+
+$$(x_{i},[y_{i1},\ldots,y_{iN}])\rightarrow(c_{i},[j_{i1},\ldots,j_{iN}]). \tag{5}$$
+
+Evaluating each step separately is our preferred method for two reasons: (1) evaluating all $N$ steps at once could exceed the context window of the reasoning models, and (2) stepwise evaluation forces the evaluator to assess each step more thoroughly, thereby naturally scaling evaluation-time compute.
+
+#### Choice of splitting function and aggregation function
+
+Additionally, we make the following adjustments to the splitting and aggregation functions when using reasoning process evaluators:
+
+* Model-based splitting: When splitting $y_i$ into $[y_{i1},\ldots,y_{iN}]$, conventional heuristic-based approaches may be ineffective in exceptional cases (e.g., when $y_i$ does not include “\n\n” or is not written in a structured output format); this matters in particular when assessing code outputs, which lack identifiable trigger phrases for segmentation. To deal with this issue, we instead adopt a model-based splitting approach where an LM $M_{split}$ is prompted to insert an indicator phrase “[SPLIT]” between each step:
+
+$$M_{split}:y_{i}\rightarrow[\,y_{i1}\;\texttt{[SPLIT]}\;y_{i2}\;\texttt{[SPLIT]}\;\ldots\;\texttt{[SPLIT]}\;y_{iN}\,] \tag{6}$$
+* Score aggregation: After acquiring $[s_{i1},s_{i2},\ldots,s_{iN}]$ as in [Equation 4](https://arxiv.org/html/2503.19877v1#S2.E4 "In Method ‣ 2.2 Reasoning Process Evaluators ‣ 2 Methodology ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators"), we aggregate the $N$ judgments into a single scalar value score $s_i$. In our experiments, we find that the mean_logit function[[46](https://arxiv.org/html/2503.19877v1#bib.bib46)] yields better results than the min function (see the sketch below). The mean_logit function can be expressed as: $s_{i}=\sigma\!\left(\frac{1}{N}\sum_{k=1}^{N}\frac{s_{ik}}{1-s_{ik}}\right)$ (7)
+
+Note that these adjustments can be applied to direct process evaluators as well. We include the results of related ablation experiments in [Appendix A](https://arxiv.org/html/2503.19877v1#A1 "Appendix A Translating Improved Evaluation to Problem-solving (extended) ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators").
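+
+For concreteness, a minimal sketch of the mean_logit aggregation as given in Equation 7 (the `eps` guard against division by zero is an addition for numerical safety, not part of the formula):
+
+```python
+import math
+from typing import List
+
+def mean_logit_aggregate(step_scores: List[float], eps: float = 1e-9) -> float:
+    """Eq. 7: average the per-step odds s/(1-s), then squash with a sigmoid."""
+    odds = [s / max(1.0 - s, eps) for s in step_scores]
+    mean_odds = sum(odds) / len(odds)
+    return 1.0 / (1.0 + math.exp(-mean_odds))
+```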
+
+### 2.3 Combining outcome judgments and process judgments
+
+At a high level, the objective of outcome evaluation is to determine the correctness of the final answer, while the objective of process evaluation is to determine the correctness of each step. Both have their advantages in identifying reasoning errors – outcome evaluation takes a more holistic approach while process evaluation spends more compute on evaluating each individual step, thus potentially identifying more fine-grained errors.
+
+We consider combining both outcome and process evaluation scores as follows:
+
+$$s_{final}=\alpha\times s_{outcome}+(1-\alpha)\times s_{process}. \tag{8}$$
+
+Here, choosing $\alpha=0$ is identical to only using the process score and choosing $\alpha=1$ is identical to using only the outcome score. In our experiments, we use $\alpha=0.5$ and refer to this method as reasoning process + outcome evaluation. See [Appendix D](https://arxiv.org/html/2503.19877v1#A4 "Appendix D Further analyses on Best-of-𝑁 performance ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators") for analyses of the effects of varying $\alpha$.
+
+3 Experiment 1: Evaluation-time Scaling Trends
+----------------------------------------------
+
+#### Motivation
+
+As described in [section 2](https://arxiv.org/html/2503.19877v1#S2 "2 Methodology ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators"), the progression from direct evaluators to reasoning evaluators successively increases the number of reasoning tokens generated at test time, which makes for a good testbed to examine the effect of evaluation-time scaling. In this section, we examine what trends emerge when we increase evaluation-time compute to detect logical errors in the generator’s response.
+
+### 3.1 Experimental setting
+
+#### Benchmark
+
+We explore scaling evaluation-time compute with the meta-evaluation benchmark ProcessBench[[63](https://arxiv.org/html/2503.19877v1#bib.bib63)]. In ProcessBench, evaluators are tasked with identifying the first paragraph in the solution that contains incorrect logic, if any. We choose to use this dataset because (1) it includes human-annotated labels indicating whether any given step within the response is correct (and is therefore suitable for testing our process evaluator baselines) and (2) it includes responses sampled from multiple LMs, making it a useful and general testbed (i.e., not specific to a particular generator) for evaluation capabilities. ProcessBench contains 3,400 data points, with queries sampled from 4 different math benchmarks and responses from 12 distinct LMs. Each response consists of 7.56 steps on average.
+
+#### Metric
+
+Performance on ProcessBench is measured using the F1 score, computed from the precision and recall of predicting the index of the first paragraph that contains a logical error. For example, evaluators are penalized for incorrectly identifying a paragraph as erroneous when no error exists, misidentifying the index of the erroneous paragraph, or failing to detect an error when one is present. If the evaluator predicts that one or more steps are incorrect, we use the earliest incorrect step (the step with the smallest index) as the final prediction when computing the metric.
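+
+One way to operationalize this metric (the official ProcessBench scoring script may differ in its details) is to reduce each example to a first-error index, with `None` standing for “no error”, and compute F1 over exact index matches:
+
+```python
+from typing import List, Optional
+
+def first_error_index(step_judgments: List[int]) -> Optional[int]:
+    """Earliest step judged incorrect (judgment 0), or None if all steps pass."""
+    for idx, judgment in enumerate(step_judgments):
+        if judgment == 0:
+            return idx
+    return None
+
+def f1_score(preds: List[Optional[int]], labels: List[Optional[int]]) -> float:
+    """F1 over first-error predictions; a hit requires an exact index match."""
+    flagged = [(p, l) for p, l in zip(preds, labels) if p is not None]
+    erroneous = [l for l in labels if l is not None]
+    hits = sum(1 for p, l in flagged if p == l)
+    precision = hits / len(flagged) if flagged else 0.0
+    recall = hits / len(erroneous) if erroneous else 0.0
+    denom = precision + recall
+    return 2 * precision * recall / denom if denom else 0.0
+```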
+
+#### Methods
+
+We consider the following evaluation methods for ProcessBench, arranged in order of increasing evaluation-time compute requirements:
+
+* Direct process evaluator: We employ process reward models (PRMs) as direct process evaluators. Note that these models do not generate CoTs but instead directly predict the correctness of all steps.
+* Single-step reasoning process evaluator: We adopt the approach proposed by Zheng et al. [[63](https://arxiv.org/html/2503.19877v1#bib.bib63)] and Zhang et al. [[62](https://arxiv.org/html/2503.19877v1#bib.bib62)], where a language model is provided with the full solution and prompted to produce a CoT as well as the index of the first paragraph containing a logical error, if one exists. This roughly corresponds to our ablated “single-step” process evaluator in [subsection 2.2](https://arxiv.org/html/2503.19877v1#S2.SS2 "2.2 Reasoning Process Evaluators ‣ 2 Methodology ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators").
+* Reasoning process evaluator: Lastly, we explore our core technique for using reasoning models as process evaluators. This involves assessing each segment individually and determining its correctness. Note that unlike the approach described in [subsection 2.2](https://arxiv.org/html/2503.19877v1#S2.SS2 "2.2 Reasoning Process Evaluators ‣ 2 Methodology ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators"), (1) splitting is not required as segments are provided as part of the ProcessBench responses and (2) an aggregation function is not required as the final goal is to predict the index of the segment containing the first mistake, not an outcome score.
+
+#### Models
+
+We examine three varieties of models.
+
+* Direct PRMs: We experiment with 10 different direct PRMs, representing the state-of-the-art on ProcessBench, from families including math-shepherd-mistral[[51](https://arxiv.org/html/2503.19877v1#bib.bib51)], Skywork[[38](https://arxiv.org/html/2503.19877v1#bib.bib38)], RLHFlow, EurusPRM[[6](https://arxiv.org/html/2503.19877v1#bib.bib6)], and Qwen2.5-Math-PRM[[62](https://arxiv.org/html/2503.19877v1#bib.bib62)].
+* Instruction-tuned Models: These are models that have been trained using supervised fine-tuning and/or RLHF, but have not been explicitly trained for reasoning. We experiment with models from the Llama-3.1[[32](https://arxiv.org/html/2503.19877v1#bib.bib32)], Llama-3.3[[32](https://arxiv.org/html/2503.19877v1#bib.bib32)], and Qwen2.5[[47](https://arxiv.org/html/2503.19877v1#bib.bib47)] families.
+* Reasoning Models: These are models that have been explicitly trained to perform reasoning using RL, or distilled from models trained to perform reasoning. We examine models from the DeepSeek-R1-Distill-Qwen[[11](https://arxiv.org/html/2503.19877v1#bib.bib11)] and QwQ[[48](https://arxiv.org/html/2503.19877v1#bib.bib48)] families.
+
+#### Matching test-time budget across methods
+
+In order to facilitate a fair comparison between single-step evaluators and our method under similar test-time compute constraints, we also test self-consistency[[52](https://arxiv.org/html/2503.19877v1#bib.bib52)] for evaluation. In this approach, the evaluator generates $M$ CoT trajectories (e.g., reasoning process evaluators assess $N$ steps, with each step evaluated $M$ times, resulting in a total of $N\times M$ inference steps), with the final prediction chosen based on majority vote. Note that since direct PRMs do not generate a CoT and produce deterministic scores, self-consistency is inapplicable.
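+
+A minimal sketch of this voting procedure, assuming a hypothetical `sample_once` callable that draws one fresh CoT from the evaluator and returns its first-error prediction (`None` meaning no error was found):
+
+```python
+from collections import Counter
+from typing import Callable, Optional
+
+def self_consistent_prediction(
+    sample_once: Callable[[], Optional[int]], m: int = 8
+) -> Optional[int]:
+    """Draw M independent CoT-based predictions and return the most common one."""
+    votes = Counter(sample_once() for _ in range(m))
+    return votes.most_common(1)[0][0]
+```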
+
+Table 1: Prompting an LM to act as a reasoning process evaluator is an effective method that leverages test-time compute to significantly enhance evaluation capability. Δ denotes gains associated with converting from reasoning outcome evaluators to reasoning process evaluators with the same evaluator LM. The best performance within each category is bolded, with † indicating that scores are taken from Zhang et al. [[62](https://arxiv.org/html/2503.19877v1#bib.bib62)].
+
+### 3.2 Experimental results
+
+The experimental results on ProcessBench are presented in [Table 1](https://arxiv.org/html/2503.19877v1#S3.T1 "Table 1 ‣ Matching test-time budget across methods ‣ 3.1 Experimental setting ‣ 3 Experiment 1: Evaluation-time Scaling Trends ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators").
+
+#### Reasoning models are better evaluators than instruction-tuned models
+
+First, we find that reasoning models show stronger evaluation performance compared to vanilla instruction-tuned models. For example, DeepSeek-R1-Distill-Qwen-32B achieves an average F1 score of 75.5 when prompted as a reasoning outcome evaluator, significantly outperforming the larger Qwen2.5-72B-Instruct model (61.2 F1), despite having only 44% as many parameters. This suggests that a model’s ability to generate long CoT enhances its evaluation capabilities, even if it is not explicitly trained for evaluation.
+
+#### Single-step methods do not match the performance of specially trained direct process evaluators
+
+The performance of single-step reasoning evaluators trails that of direct process evaluators. For example, DeepSeek-R1-Distill-Qwen-7B achieves an F1 score of 54.5 when prompted as a reasoning outcome evaluator, whereas Qwen2.5-Math-7B-PRM800K achieves 56.2 and Qwen2.5-Math-PRM-7B obtains 73.5. Similarly, DeepSeek-R1-Distill-Qwen-32B (75.5) falls slightly behind Qwen2.5-Math-PRM-72B (78.3). This suggests that single-step evaluation with reasoning models alone is not enough to identify reasoning errors within the given response.
+
+#### Increasing test-time compute through process evaluation boosts evaluation capabilities
+
+We compare two approaches for scaling evaluation-time compute: (1) self-consistency, where the evaluator generates multiple CoTs for each response and the final prediction is decided via majority vote and (2) reasoning process evaluation, where the evaluator is prompted to assess each step in the solution individually. The key difference between these approaches is whether the evaluator assesses the entire response multiple times or each step individually, with both approaches requiring multiple inference calls. As ProcessBench has an average of 7.56 steps and we generate 8 CoTs for the self-consistency baseline, the two approaches incur similar inference costs.
+
+Our results indicate that prompting reasoning models to evaluate each step is more effective than applying self-consistency given a fixed evaluation-time compute budget: DeepSeek-R1-Distill-Qwen-7B, QwQ-32B-Preview, and DeepSeek-R1-Distill-Qwen-32B achieve F1 scores of 64.8, 75.3, and 78.6 respectively through reasoning process evaluation, outperforming the corresponding self-consistency scores of 60.9, 71.5, and 77.8. Notably, reasoning process evaluation enables DeepSeek-R1-Distill-Qwen-32B (78.6) to outperform the direct process evaluator Qwen2.5-Math-PRM-72B (78.3) – a model nearly twice as big – without requiring any additional training. Furthermore, prompting instruction-tuned models to act as reasoning process evaluators also boosts performance, with Llama-3.1-8B-Instruct achieving a +7.3 (14.8 → 22.1) and Qwen2.5-32B-Instruct a +15.0 (45.0 → 60.0) gain, although reasoning models still outperform instruction-tuned models of comparable sizes.
+
+#### Combining self-consistency and reasoning process evaluation can further enhance performance
+
+We find that combining self-consistency with reasoning process evaluation is an effective way of further scaling evaluation-time compute. Notably, DeepSeek-R1-Distill-Qwen-7B achieves an F1 score of 73.7, outperforming the Qwen2.5-Math-PRM-7B direct process evaluation baseline (73.5). DeepSeek-R1-Distill-Qwen-32B meanwhile obtains an F1 score of 82.8, surpassing the Qwen2.5-Math-PRM-72B baseline (78.3). These scores significantly surpass the performance of GPT-4-0806 (61.9) and are comparable to o1-mini (87.9) as reasoning outcome evaluators.
+
+#### Final Remark
+
+In summary, just as prior work has shown that increasing test-time compute can improve a generator’s problem-solving capabilities, we demonstrate that an evaluator’s performance also improves monotonically with increased test-time compute, even without additional training.
+
+4 Experiment 2: Translating Improved Evaluation to Problem-solving
+------------------------------------------------------------------
+
+#### Motivation
+
+Next, we examine whether scaling evaluation-time compute can effectively improve the generator’s problem-solving capabilities. Conventionally, test-time scaling has been performed on the generator side by sampling more response candidates. In contrast, we focus on scaling evaluator compute while using less generator compute (i.e., sampling fewer response candidates).
+
+### 4.1 Experimental setting
+
+#### Benchmarks
+
+We adopt the Best-of-$N$ experimental setting of Cui et al. [[6](https://arxiv.org/html/2503.19877v1#bib.bib6)]: we utilize three LMs as generators, namely Eurus-2-SFT[[6](https://arxiv.org/html/2503.19877v1#bib.bib6)], Llama3.1-70B-Instruct[[32](https://arxiv.org/html/2503.19877v1#bib.bib32)], and Qwen2.5-7B-Instruct[[58](https://arxiv.org/html/2503.19877v1#bib.bib58)]. Using these LMs, we generate 64 responses per instance across seven benchmarks (AIME24, AMC23, Minerva Math[[27](https://arxiv.org/html/2503.19877v1#bib.bib27)], OlympiadBench[[12](https://arxiv.org/html/2503.19877v1#bib.bib12)], MATH500[[13](https://arxiv.org/html/2503.19877v1#bib.bib13)], LeetCode[[10](https://arxiv.org/html/2503.19877v1#bib.bib10)], and GPQA[[41](https://arxiv.org/html/2503.19877v1#bib.bib41)]), covering a total of 4,680 instances and 299,520 responses. In the Best-of-$N$ setting, evaluators are used to assess and rank the $N$ responses, with the highest-scoring response chosen as the final prediction.
+
+#### Metrics
+
+For LeetCode, we report pass@1, which measures whether a response passes all test cases. For the remaining benchmarks, we report accuracy scores, which measure whether a response is correct. We report the total average across 3 generators and 7 benchmarks, resulting in 21 settings.
+
+#### Methods
+
+We consider the following Best-of-$N$ evaluation methods, arranged in order of increasing evaluation-time compute requirements:
+
+* Best-of-1: We measure model performance when generating only a single response per problem. This acts as our naive baseline for comparing relative improvements across different evaluators.
+* Direct outcome evaluator: We use direct outcome evaluators (ORMs) for Best-of-$N$ reranking. We experiment with Skywork-Reward-Llama-3.1-8B-v0.2[[31](https://arxiv.org/html/2503.19877v1#bib.bib31)] and Skywork-Reward-Gemma-2-27B-v0.2[[31](https://arxiv.org/html/2503.19877v1#bib.bib31)], which are the state-of-the-art direct outcome evaluators on RewardBench[[25](https://arxiv.org/html/2503.19877v1#bib.bib25)] (a widely used benchmark to assess ORMs).
+* Direct process evaluator: We utilize direct process evaluators (PRMs) to assess each individual step within a response. The step-level judgments are then aggregated into a final reward score, which is used for reranking. We split and aggregate responses based on the heuristics detailed in each PRM’s official GitHub repository. We experiment with Qwen2.5-Math-PRM-7B and Qwen2.5-Math-PRM-72B, which scored the highest among comparably sized PRMs in [section 3](https://arxiv.org/html/2503.19877v1#S3 "3 Experiment 1: Evaluation-time Scaling Trends ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators").
+* Reasoning outcome evaluator: We prompt the reasoning models DeepSeek-R1-Distill-Qwen-7B and DeepSeek-R1-Distill-Qwen-32B to act as reasoning outcome evaluators. See [subsection 2.1](https://arxiv.org/html/2503.19877v1#S2.SS1 "2.1 Reasoning Outcome Evaluators ‣ 2 Methodology ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators") for an overview of reasoning outcome evaluation. We also present the results for fine-tuned outcome evaluators, including Llama3-8B-CLoud-RM[[2](https://arxiv.org/html/2503.19877v1#bib.bib2)] and the Prometheus 2 family[[24](https://arxiv.org/html/2503.19877v1#bib.bib24)], in [Appendix A](https://arxiv.org/html/2503.19877v1#A1 "Appendix A Translating Improved Evaluation to Problem-solving (extended) ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators"). Note that while these models do generate CoT, they have not undergone RL training for reasoning. Such models therefore produce shorter responses and lack the self-correction and backtracking capabilities that reasoning models possess.
+* Reasoning process evaluator: We prompt the reasoning models DeepSeek-R1-Distill-Qwen-7B and DeepSeek-R1-Distill-Qwen-32B to act as reasoning process evaluators. See [subsection 2.2](https://arxiv.org/html/2503.19877v1#S2.SS2 "2.2 Reasoning Process Evaluators ‣ 2 Methodology ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators") for an overview of reasoning process evaluation. We also include the results of single-step reasoning process evaluators ([section 3](https://arxiv.org/html/2503.19877v1#S3 "3 Experiment 1: Evaluation-time Scaling Trends ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators")) in [Appendix A](https://arxiv.org/html/2503.19877v1#A1 "Appendix A Translating Improved Evaluation to Problem-solving (extended) ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators").
+* Reasoning process + outcome evaluator: We combine reasoning process and reasoning outcome evaluation; see [subsection 2.3](https://arxiv.org/html/2503.19877v1#S2.SS3 "2.3 Combining outcome judgments and process judgments ‣ 2 Methodology ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators") for an overview of this approach. To avoid overfitting to our evaluation benchmarks, we set $\alpha=0.5$, thereby giving equal weight to the outcome and process components (see [Figure 6](https://arxiv.org/html/2503.19877v1#A4.F6 "Figure 6 ‣ D.1 Why does averaging process and outcome scores improve Best-of-𝑁 performance? ‣ Appendix D Further analyses on Best-of-𝑁 performance ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators") for the results of experiments where we vary $\alpha$). We experiment with the reasoning models QwQ-32B-Preview, DeepSeek-R1-Distill-Qwen-7B, and DeepSeek-R1-Distill-Qwen-32B, using the same model for both the outcome and process evaluation components in each case.
+
+
+
+Figure 3: We compare direct evaluators (using Best-of-64) against reasoning evaluators (using Best-of-8). Reasoning evaluators achieve superior performance compared to their direct counterparts with fewer candidate responses (left figure $x$-axis) and with less inference compute (right figure $x$-axis; [Appendix C](https://arxiv.org/html/2503.19877v1#A3 "Appendix C Approximation of test-time compute ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators")). Combining outcome and process judgments yields further improvements.
+
+#### Matching evaluation-time budget across baselines
+
+We test direct outcome evaluators and direct process evaluators in the Best-of-64 setting. To account for the higher per-instance inference cost of reasoning evaluators, we evaluate them in the Best-of-8 setting instead, thereby ensuring a compute budget comparable to that of direct evaluators. We use Qwen2.5-72B-Instruct to segment the response into steps, yielding 10.07 steps per response on average across the 21 settings.
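+
+Putting these pieces together, Best-of-$N$ reranking reduces to scoring each candidate and keeping the argmax. A minimal sketch, where `outcome_eval` and `process_eval` are placeholders for the reasoning evaluator calls described in section 2:
+
+```python
+from typing import Callable, List
+
+def best_of_n(
+    responses: List[str],
+    outcome_eval: Callable[[str], float],
+    process_eval: Callable[[str], float],
+    alpha: float = 0.5,
+) -> str:
+    """Rerank candidates with the combined score of Eq. 8; return the best one."""
+    def combined(y: str) -> float:
+        return alpha * outcome_eval(y) + (1.0 - alpha) * process_eval(y)
+    return max(responses, key=combined)
+```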
+
+### 4.2 Experimental results
+
+The results of our Best-of-N 𝑁 N italic_N experiments are presented in [Figure 3](https://arxiv.org/html/2503.19877v1#S4.F3 "Figure 3 ‣ Methods ‣ 4.1 Experimental setting ‣ 4 Experiment 2: Translating Improved Evaluation to Problem-solving ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators") and [Figure 5](https://arxiv.org/html/2503.19877v1#A0.F5 "Figure 5 ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators") (Appendix[A](https://arxiv.org/html/2503.19877v1#A1 "Appendix A Translating Improved Evaluation to Problem-solving (extended) ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators")).
+
+#### Scaling evaluation-time compute is more efficient than generating more candidate responses
+
+From the top left plot in [Figure 3](https://arxiv.org/html/2503.19877v1#S4.F3 "Figure 3 ‣ Methods ‣ 4.1 Experimental setting ‣ 4 Experiment 2: Translating Improved Evaluation to Problem-solving ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators"), we see that reasoning outcome evaluation, reasoning process evaluation, and reasoning process + outcome evaluation with $N=8$ (Best-of-8) all achieve comparable or higher scores than direct outcome evaluation and direct process evaluation with $N=64$ (Best-of-64). From the top right plot in [Figure 3](https://arxiv.org/html/2503.19877v1#S4.F3 "Figure 3 ‣ Methods ‣ 4.1 Experimental setting ‣ 4 Experiment 2: Translating Improved Evaluation to Problem-solving ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators"), we also observe that the 32B reasoning process + outcome evaluator ($N=8$; black line) achieves higher performance than the 72B direct process evaluator ($N=64$; light blue line) despite expending less evaluation-time compute (see [Appendix C](https://arxiv.org/html/2503.19877v1#A3 "Appendix C Approximation of test-time compute ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators") for more details). This suggests that allocating more of the test-time compute budget to evaluation rather than generation can improve performance.
+
+We also observe from [Figure 3](https://arxiv.org/html/2503.19877v1#S4.F3 "Figure 3 ‣ Methods ‣ 4.1 Experimental setting ‣ 4 Experiment 2: Translating Improved Evaluation to Problem-solving ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators") that direct outcome and direct process evaluators readily suffer from reward model overoptimization[[8](https://arxiv.org/html/2503.19877v1#bib.bib8), [40](https://arxiv.org/html/2503.19877v1#bib.bib40)]. Although increasing $N$ generally raises the Best-of-$N$ performance upper bound, it also increases the likelihood of the imperfect evaluator encountering suboptimal responses that it nonetheless assigns high scores to. This phenomenon results in diminishing returns as $N$ increases, with score improvements progressively slowing, plateauing, or even decreasing, as can be seen across all plots in [Figure 3](https://arxiv.org/html/2503.19877v1#S4.F3 "Figure 3 ‣ Methods ‣ 4.1 Experimental setting ‣ 4 Experiment 2: Translating Improved Evaluation to Problem-solving ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators"). By providing more precise assessments, reasoning outcome, process, and process + outcome evaluators enable strong Best-of-$N$ performance without requiring a substantial increase in $N$, thereby mitigating the risk of reward model overoptimization.
+
+#### Combining scores from reasoning outcome and process evaluation can boost performance
+
+Comparing the score trends of reasoning outcome evaluators and reasoning process evaluators in the top right plot of [Figure 3](https://arxiv.org/html/2503.19877v1#S4.F3 "Figure 3 ‣ Methods ‣ 4.1 Experimental setting ‣ 4 Experiment 2: Translating Improved Evaluation to Problem-solving ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators"), we see that reasoning process evaluators achieve similar scores to reasoning outcome evaluators despite consuming more inference compute. This suggests that reasoning process evaluation alone is not always more effective than reasoning outcome evaluation, contrary to previous empirical results on direct evaluators[[30](https://arxiv.org/html/2503.19877v1#bib.bib30), [51](https://arxiv.org/html/2503.19877v1#bib.bib51)]. Combining reasoning outcome and process judgments (i.e., reasoning process + outcome evaluation), however, yields concrete performance gains compared to using either reasoning outcome evaluation or reasoning process evaluation in isolation. We hypothesize that the two evaluation approaches provide complementary signals that enable more accurate assessments. We investigate the mechanisms behind this in [section 5](https://arxiv.org/html/2503.19877v1#S5 "5 Discussion ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators").
+
+#### Reasoning evaluators are more effective than direct evaluators for coding
+
+We observe that the score trends of direct process evaluators on LeetCode (bottom-row plots in [Figure 3](https://arxiv.org/html/2503.19877v1#S4.F3 "Figure 3 ‣ Methods ‣ 4.1 Experimental setting ‣ 4 Experiment 2: Translating Improved Evaluation to Problem-solving ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators")) differ from the overall averaged score trends (top-row plots in [Figure 3](https://arxiv.org/html/2503.19877v1#S4.F3 "Figure 3 ‣ Methods ‣ 4.1 Experimental setting ‣ 4 Experiment 2: Translating Improved Evaluation to Problem-solving ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators")). Surprisingly, we find that direct process evaluators perform worse than direct outcome evaluators on LeetCode across different $N$ scales, contradicting prior findings that PRMs generally outperform ORMs[[30](https://arxiv.org/html/2503.19877v1#bib.bib30), [51](https://arxiv.org/html/2503.19877v1#bib.bib51)]. We also find that direct process evaluation performance on LeetCode plateaus at a lower value of $N$ than in other tasks.
+
+We attribute these findings to two main causes. Firstly, direct process evaluators are only trained on math data (e.g., PRM800K[[30](https://arxiv.org/html/2503.19877v1#bib.bib30)]), making them less effective on out-of-domain tasks such as coding. Secondly, the heuristic-based splitting methods (e.g., splitting based on newline characters) typically adopted by direct process evaluators may be suboptimal for performing process evaluation on code outputs.
+
+#### Final Remark
+
+To summarize, we find that scaling evaluation-time compute is not only helpful for assessing LM-generated outputs ([section 3](https://arxiv.org/html/2503.19877v1#S3 "3 Experiment 1: Evaluation-time Scaling Trends ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators")), but is also a cost-effective method for improving the LMs’ problem-solving capabilities, especially in the code domain.
+
+5 Discussion
+------------
+
+### 5.1 Can reasoning models self-evaluate their own outputs?
+
+Table 2: Self-evaluation results: We prompt DeepSeek-R1-Distill-Qwen-7B to act as both a generator and its own Best-of-$N$ reasoning process + outcome evaluator (self-evaluation). We measure performance improvements (between Best-of-1 and Best-of-8) as a percentage of the gap between Best-of-1 and oracle performance, denoted as Gap Recovered. We find that the gains associated with this are comparable to or larger than the gains associated with Best-of-$N$ using the same evaluation strategy on the outputs of instruction-tuned generators.
+
+| Generator | N=1 | N=8 | Oracle | Gap Recovered (%) |
+| --- | --- | --- | --- | --- |
+| **AIME24** | | | | |
+| Eurus-2-SFT | 13.3 | 20.0 | 20.0 | 100.0 |
+| Llama3.1-70B-Instruct | 16.7 | 23.3 | 36.7 | 33.0 |
+| Qwen2.5-7B-Instruct | 10.0 | 16.7 | 23.3 | 50.4 |
+| Self-Evaluation (Outcome Eval on CoT + Process Eval on summary) | 50.0 | 73.3 | 83.3 | 70.0 |
+| Self-Evaluation (Outcome Eval on summary + Process Eval on summary) | 50.0 | 66.7 | 83.3 | 50.2 |
+| **AMC23** | | | | |
+| Eurus-2-SFT | 31.1 | 45.3 | 62.7 | 44.9 |
+| Llama3.1-70B-Instruct | 26.8 | 45.3 | 65.1 | 48.3 |
+| Qwen2.5-7B-Instruct | 36.6 | 51.6 | 69.9 | 45.0 |
+| Self-Evaluation (Outcome Eval on CoT + Process Eval on summary) | 85.5 | 88.0 | 92.8 | 34.2 |
+| Self-Evaluation (Outcome Eval on summary + Process Eval on summary) | 85.5 | 89.2 | 92.8 | 50.7 |
+
+In [section 4](https://arxiv.org/html/2503.19877v1#S4 "4 Experiment 2: Translating Improved Evaluation to Problem-solving ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators"), we study whether using reasoning models as Best-of-$N$ evaluators improves the problem-solving capabilities of instruction-tuned models. This naturally raises the question of whether similar gains could be achieved if a reasoning model is used both as the generator and the evaluator (i.e., self-evaluation). To explore this, we conduct a preliminary experiment in which DeepSeek-R1-Distill-Qwen-7B is used both to generate responses and to evaluate them. Due to computational constraints, we only assess self-evaluation on AIME24 and AMC23 in the Best-of-8 setting.
+
+#### The challenge of assessing long CoTs
+
+A notable characteristic of current reasoning models is that their CoTs (bookended by `<think>` and `</think>` tokens) are often lengthy and include numerous reasoning steps. This presents practical challenges for evaluation, as the evaluator must be able to handle long contexts and accurately assess text that includes exploratory reasoning, backtracking, and self-correction steps. However, rather than evaluating the entire CoT trace, we could instead evaluate the reasoning summary that is automatically produced by the reasoning model after the CoT. This reasoning summary condenses the exploratory CoT into a more concise form resembling the CoT of instruction-tuned models: see Appendix [F.2](https://arxiv.org/html/2503.19877v1#A6.SS2 "F.2 Example of reasoning model output ‣ Appendix F Can reasoning models self-evaluate their own outputs? (extended) ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators") for an illustrative example.
+
+#### Experimental setting
+
+We first generate responses to the AIME24 and AMC23 datasets with DeepSeek-R1-Distill-Qwen-7B, setting $t=0.6$ and $N=8$. We then perform Best-of-$N$ by prompting DeepSeek-R1-Distill-Qwen-7B to act as a reasoning outcome + process evaluator, following the method described in [subsection 2.3](https://arxiv.org/html/2503.19877v1#S2.SS3 "2.3 Combining outcome judgments and process judgments ‣ 2 Methodology ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators"). In addition to reporting Best-of-1 and Best-of-8 performance, we also report the percentage of the performance gap between the Best-of-1 and oracle performances recovered by Best-of-8 (denoted as Gap Recovered). For self-evaluation, we always perform reasoning process evaluation on the output summaries, whereas we experiment with reasoning outcome evaluation on both the summaries and the entire CoT. We document our findings in Table [2](https://arxiv.org/html/2503.19877v1#S5.T2 "Table 2 ‣ 5.1 Can reasoning models self-evaluate their own outputs? ‣ 5 Discussion ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators"). See Table [6](https://arxiv.org/html/2503.19877v1#A6.T6 "Table 6 ‣ F.1 Additional Results ‣ Appendix F Can reasoning models self-evaluate their own outputs? (extended) ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators") for additional self-evaluation experimental results.
+
+#### Main Results
+
+Our results in Table [2](https://arxiv.org/html/2503.19877v1#S5.T2 "Table 2 ‣ 5.1 Can reasoning models self-evaluate their own outputs? ‣ 5 Discussion ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators") provide preliminary evidence that reasoning models can be used to improve their own outputs through Best-of-$N$. Specifically, the gain from self-evaluation (Gap Recovered for DeepSeek-R1-Distill-Qwen-7B) is comparable to or larger than the gains associated with Best-of-$N$ on the outputs of the other generators (Eurus-2-SFT, Llama-3.1-70B-Instruct, and Qwen2.5-7B-Instruct) when using the same evaluator (DeepSeek-R1-Distill-Qwen-7B).
+
+#### Evaluating the summary is as effective as evaluating the entire CoT
+
+We also find that performing the outcome-evaluation component of our reasoning process + outcome evaluation strategy on the full thoughts improves over outcome evaluation on summaries for AIME, whereas the opposite holds for AMC; both strategies, however, achieve notable gains over Best-of-1 on both datasets. We hope these results encourage further investigation into self-evaluation strategies for reasoning models.
+
+### 5.2 How are outcome evaluation and process evaluation different from each other?
+
+
+
+Figure 4: Reasoning process evaluators show high-precision and low-recall trends compared to reasoning outcome evaluators.
+
+Our results from Section [4](https://arxiv.org/html/2503.19877v1#S4 "4 Experiment 2: Translating Improved Evaluation to Problem-solving ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators") show that, when using the same LM as the evaluator, reasoning process evaluators achieve a lower score than reasoning outcome evaluators in the Best-of-$N$ setting. This contradicts past empirical results demonstrating that process evaluation is superior to outcome evaluation [[30](https://arxiv.org/html/2503.19877v1#bib.bib30), [51](https://arxiv.org/html/2503.19877v1#bib.bib51)]. At the same time, combining outcome and process judgments leads to notable gains, indicating that the two evaluation approaches provide complementary benefits.
+
+To further understand this, we analyze our results by framing the Best-of-$N$ setting as a binary classification task. Specifically, given all of the responses and their corresponding reasoning evaluator-generated scores, we divide the responses into two groups based on whether their score predictions exceed a certain threshold $T$. We then measure precision and recall by comparing these classification predictions against the ground-truth correctness of each response (a correct answer that is assigned a low score counts as a false negative, and an incorrect answer assigned a high score as a false positive). As Figure 4 shows, reasoning outcome evaluators exhibit high recall but low precision, frequently assigning very high (> 0.99) scores to responses with incorrect answers. When process scores are used together, such false positives can be effectively reranked and filtered, as shown in [subsection D.1](https://arxiv.org/html/2503.19877v1#A4.SS1 "D.1 Why does averaging process and outcome scores improve Best-of-𝑁 performance? ‣ Appendix D Further analyses on Best-of-𝑁 performance ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators"), leading to improved performance in the Best-of-$N$ setting.
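+
+A small sketch of this classification framing (the threshold, names, and data layout are ours, for illustration only):
+
+```python
+def precision_recall(scores: list[float], correct: list[bool], T: float) -> tuple[float, float]:
+    # Treat "score >= T" as predicting the response correct, then compare
+    # the predictions against ground-truth correctness.
+    preds = [s >= T for s in scores]
+    tp = sum(p and c for p, c in zip(preds, correct))
+    fp = sum(p and not c for p, c in zip(preds, correct))
+    fn = sum(not p and c for p, c in zip(preds, correct))
+    precision = tp / (tp + fp) if (tp + fp) else 0.0
+    recall = tp / (tp + fn) if (tp + fn) else 0.0
+    return precision, recall
+```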
+
+Table 5: Examples of steps flagged as errors by reasoning process evaluators.
+
+Appendix E Qualitative analysis
+-------------------------------
+
+In this section, we briefly present the criteria used for the manual analysis of steps that lead to a correct answer but are flagged as incorrect by reasoning process evaluators.
+
+First, we randomly sample 90 responses from MATH500, OlympiadBench, and AMC generated by Llama-3.1-70B-Instruct, in which the response's final answer is correct but the reasoning process evaluator (DeepSeek-R1-Distill-Qwen-7B) flags a step-level error. These three datasets were chosen because (1) they cover a diverse range of difficulty, from relatively easy (MATH500) through medium (AMC) to the hardest problems (OlympiadBench) (although AIME is the hardest dataset, it contains only 30 problems, too few for manual error analysis), and (2) they demonstrate the most significant gap between reasoning outcome evaluators and reasoning process evaluators in the Best-of-$N$ setting.
+
+The authors manually analyzed the first erroneous step flagged by the reasoning process evaluators. Flagged steps are classified into errors and non-errors: errors comprise clear logical or mathematical mistakes and unjustified falsifiable statements, whereas non-errors comprise correct reasoning steps, assumptions, and text unrelated to reasoning. The taxonomy of errors is displayed in Table [5](https://arxiv.org/html/2503.19877v1#A4.T5 "Table 5 ‣ D.2 How does problem difficulty affect the effectiveness of outcome versus process evaluation? ‣ Appendix D Further analyses on Best-of-𝑁 performance ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators").
+
+Appendix F Can reasoning models self-evaluate their own outputs? (extended)
+---------------------------------------------------------------------------
+
+### F.1 Additional Results
+
+Table 6: Best-of-$N$ performance with self-evaluation: We employ DeepSeek-R1-Distill-Qwen-7B as a reasoning outcome evaluator, a reasoning process evaluator, and a reasoning process + outcome evaluator to assess outputs from diverse generators, including itself (denoted as self-evaluation). We also measure the performance improvement (from Best-of-1 to Best-of-8) as a percentage of the gap between Best-of-1 and oracle performance, denoted as Gap Recovered.
+
+| Generator | N=1 | N=8 | Oracle | Gap Recovered (%) |
+| --- | --- | --- | --- | --- |
+| **AIME24** | | | | |
+| Eurus-2-SFT | 13.3 | 20.0 | 20.0 | 100.0 |
+| Llama3.1-70B-Instruct | 16.7 | 23.3 | 36.7 | 33.0 |
+| Qwen2.5-7B-Instruct | 10.0 | 16.7 | 23.3 | 50.4 |
+| Self-Evaluation (Outcome Eval on CoT) | 50.0 | 60.0 | 83.3 | 30.0 |
+| Self-Evaluation (Outcome Eval on summary) | 50.0 | 56.7 | 83.3 | 20.1 |
+| Self-Evaluation (Process Eval on summary) | 50.0 | 70.0 | 83.3 | 60.0 |
+| Self-Evaluation (Outcome Eval on CoT + Process Eval on summary) | 50.0 | 73.3 | 83.3 | 70.0 |
+| Self-Evaluation (Outcome Eval on summary + Process Eval on summary) | 50.0 | 66.7 | 83.3 | 50.2 |
+| **AMC23** | | | | |
+| Eurus-2-SFT | 31.1 | 45.3 | 62.7 | 44.9 |
+| Llama3.1-70B-Instruct | 26.8 | 45.3 | 65.1 | 48.3 |
+| Qwen2.5-7B-Instruct | 36.6 | 51.6 | 69.9 | 45.0 |
+| Self-Evaluation (Outcome Eval on CoT) | 85.5 | 85.5 | 92.8 | 0.0 |
+| Self-Evaluation (Outcome Eval on summary) | 85.5 | 85.5 | 92.8 | 0.0 |
+| Self-Evaluation (Process Eval on summary) | 85.5 | 86.8 | 92.8 | 17.8 |
+| Self-Evaluation (Outcome Eval on CoT + Process Eval on summary) | 85.5 | 88.0 | 92.8 | 34.2 |
+| Self-Evaluation (Outcome Eval on summary + Process Eval on summary) | 85.5 | 89.2 | 92.8 | 50.7 |
+
+We also consider self-evaluation with DeepSeek-R1-Distill-Qwen-7B using reasoning process evaluation and reasoning outcome evaluation alone, in addition to process + outcome evaluation, and document our findings in Table [6](https://arxiv.org/html/2503.19877v1#A6.T6 "Table 6 ‣ F.1 Additional Results ‣ Appendix F Can reasoning models self-evaluate their own outputs? (extended) ‣ Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators"). We find that process evaluation outperforms outcome evaluation on both full CoTs and summaries, and that process + outcome evaluation in turn outperforms process-only evaluation.
+
+### F.2 Example of reasoning model output
+
+We provide an example of DeepSeek-R1-Distill-Qwen-7B’s response to an input from AMC23. The pink box contains the chain-of-thought portion of the response, while the green box contains the summary. The CoT and the summary are separated by a `</think>` token.
+
+Problem. In the state of Coinland, coins have values $6$, $10$, and $15$ cents. Suppose $x$ is the value in cents of the most expensive item in Coinland that cannot be purchased using these coins with exact change. What is the sum of the digits of $x$?
+
+Response.
+
+Appendix G Prompts for Reasoning evaluators
+-------------------------------------------
+
+We provide the prompts used to run reasoning models as process evaluators and outcome evaluators:
+
+Reasoning process evaluator prompt:
+
+```
+The following is a math problem and a solution (split into paragraphs, enclosed with tags and indexed from 0):
+
+Problem
+{problem}
+
+Previous Paragraph(s)
+{previous_paragraphs}
+
+Current Paragraph
+{current_paragraph}
+
+Instructions
+
+Your task is to decide whether the current paragraph is correct or not. If the current paragraph is correct, return the index of 1 and if not, return the index of 0.
+
+Don't try to solve the problem. Your task is only to critique the current paragraph.
+
+Please put your final prediction (i.e., the correctness, which must be 0 or 1) in \boxed{{}}. Every output must therefore contain either \boxed{{1}} or \boxed{{0}}.
+
+You should only consider the logical correctness of the current paragraph, not whether it is useful or has the potential to lead to the correct answer.
+```
+
+Reasoning outcome evaluator prompt:
+
+```
+The following is a math problem and a solution (split into paragraphs, enclosed with tags and indexed from 0):
+
+Problem
+{problem}
+
+Response
+{response}
+
+Instructions
+
+Your task is to decide whether the solution is correct or not. If the solution is correct, return the index of 1 and if not, return the index of 0.
+
+Don't try to solve the problem. Your task is only to critique the solution.
+
+Please put your final answer (i.e., the index, which must be 0 or 1) in \boxed{{}}. Every output must therefore contain either \boxed{{1}} or \boxed{{0}}.
+```
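+
+As a usage sketch: the doubled braces suggest these templates are filled with Python's `str.format`, and since each prompt requires the verdict inside `\boxed{}`, it can be recovered with a small regex. The helper below is our own illustration, not the paper's code:
+
+```python
+import re
+
+def parse_verdict(evaluator_output: str) -> int | None:
+    """Extract the final \\boxed{0|1} verdict; return None if absent."""
+    matches = re.findall(r"\\boxed\{\s*([01])\s*\}", evaluator_output)
+    # Keep the last occurrence, since the final prediction ends the output.
+    return int(matches[-1]) if matches else None
+
+# e.g. parse_verdict("... the solution is valid, so \\boxed{1}") == 1
+```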