# RETHINKING REWARD MODELING IN PREFERENCE-BASED LARGE LANGUAGE MODEL ALIGNMENT

Hao Sun*, Yunyi Shen*, Jean-Francois Ton

University of Cambridge, Massachusetts Institute of Technology, ByteDance Research

# ABSTRACT

The Bradley-Terry (BT) model is a common and successful practice in reward modeling for Large Language Model (LLM) alignment. However, it remains unclear why this model — originally developed for multi-player stochastic game matching — can be adopted to convert pairwise response comparisons to reward values and make predictions, especially given that only a limited number of prompt-response pairs are sparsely compared with others. In this paper, we first establish the convergence rate of BT reward models based on deep neural networks using embeddings, providing a theoretical foundation for their use. Though theoretically sound, we argue that the BT model is not a necessary choice from the perspective of downstream optimization: a reward model only needs to preserve correct ranking predictions through a monotonic transformation of the true reward. We highlight the critical concept of order consistency in reward modeling and propose a simple and straightforward upper-bound algorithm, compatible with off-the-shelf binary classifiers, as an alternative order-consistent reward modeling objective.
To offer practical insights, we empirically evaluate the performance of these different reward modeling approaches across more than 12,000 experimental setups, using 6 base LLMs, 2 datasets, and diverse annotation designs that vary in quantity, quality, and pairing choices in preference annotations. [Code & Data] [A full version is available at this link]

# 1 INTRODUCTION

The alignment of Large Language Models (LLMs) is crucial for their safe and effective deployment across various applications. Current research on reinforcement learning from human feedback (RLHF) (Christiano et al., 2017; Stiennon et al., 2020; Ouyang et al., 2022; Bai et al., 2022a) has largely focused on utilizing preference-based annotations provided by humans or general-purpose LLMs (Bai et al., 2022b; Lee et al., 2023; Guo et al., 2024). In general, there are two primary approaches to RLHF: direct policy optimization (Rafailov et al., 2024; Zhao et al., 2023; Azar et al., 2023), which aligns LLMs with supervised learning objectives, and the alternative that constructs a reward model to guide LLM optimization via either supervised learning or reinforcement learning (Sun, 2023; Yuan et al., 2023; Munos et al., 2023; Li et al., 2023).

Among these strategies, the Bradley-Terry (BT) model (Bradley & Terry, 1952) is commonly employed to convert pairwise comparisons into scores and has demonstrated its success in large-scale alignment systems (Ouyang et al., 2022; Touvron et al., 2023). Despite its empirical success, the theoretical justification for using the BT model in this context remains underexplored, particularly when dealing with sparse comparisons on a limited number of prompt-response pairs. Furthermore, it is unclear how necessary it is to use the BT model, and what data format is preferred in annotation. Our work aims to address these key questions by reflecting on reward modeling in LLM alignment.
The subsequent sections of this paper are structured to answer those questions:

- Question 1: When the number of players is greater than the number of comparisons (as is often the case in LLM alignment settings), is the use of the BT model theoretically sound, and what factors contribute to its success in practice?

In Section 2 we first review the usage of BT from a statistics perspective. We show that the classical BT model can be seamlessly applied to the LLM arena cases, but falls short in reward modeling due to the extremely sparse comparisons and the requirement of making predictions. We explore the regression variants of BT models (e.g., Springall, 1973; Bockenholt, 1988) and their neural network extensions, providing theoretical results on their ability to approximate the true reward up to an additive constant in the context of LLM alignment.

- Question 2: What are alternative approaches to reward modeling other than the BT model?

Since the primary objective of reward modeling is to optimize LLM outputs by identifying good responses at inference, learning a reward function up to any monotonic transformation should be sufficient for such an objective. This fact motivates us to propose a simple objective based on order consistency in Section 3. We show that both the BT model and classification models fall within this objective class. Corresponding empirical studies are conducted in Section 4.

- Question 3: The conventional applications of the BT model assume randomized pairwise comparisons (e.g., randomized game matches among players). Would cross-prompt comparisons lead to more effective reward modeling?

Our theoretical analysis emphasizes the necessity of using regression on the embedding space to predict rewards, which hints at the possibility of different types of comparisons. Specifically, we find no inherent advantages in restricting comparisons to responses from the same prompt, as is commonly done in the literature.
In Section 4, we empirically investigate the impact of cross-prompt comparisons in LLM alignment and provide insights into their potential benefits. + +We defer related work discussions to Appendix A. To summarize our contributions: + +1. Formally, we provide a comprehensive analysis of the application of the BT model in LLM alignment, contrasting its traditional use in multi-player arenas with the unique challenges posed in this context. We analyze the underlying rationale and offer a thorough justification for applying the BT model to LLM reward modeling. +2. Theoretically, we introduce the first asymptotic theory for neural network-based BT regression in preference-based reward modeling. Our work establishes the first risk bound for BT model reward estimation in the context of LLM alignment. +3. Practically, we propose order consistency as a core objective of reward modeling, demonstrating how this principle can derive both the BT model and an alternative classification-based approach. This alternative offers greater flexibility compared to the BT model, broadening its applicability. +4. Empirically, we conduct extensive experiments — covering 6 base LLMs, 2 datasets, 3 response sampling methods, 6 annotation noise levels, 3 reward model implementations, 4 annotation availability scenarios, and 5 random seeds — resulting in over 12,000 runs. These experiments demonstrate the statistical efficacy of the classification-based reward models and compare them with the BT model across diverse settings. + +# 2 RETHINKING THE USAGE OF BT MODELS IN LLM ALIGNMENT + +# 2.1 TWO DIFFERENT BT MODELS — PARAMETER ESTIMATION AND PREDICTION + +There are at least two distinct applications of the BT model in LLM settings: (1) LLM chatbot arenas (Chiang et al., 2024) and (2) reward modeling. While both utilize pairwise comparisons, their objectives and the volume of comparisons differ significantly. 
In the chatbot arena, multiple LLMs compete against one another based on human feedback through paired comparisons. Here, each LLM functions as a player, and the human-annotated preferences represent the outcome of these matches. The goal is to assign a single performance score to each LLM player. In Chiang et al. (2024), 130 models were compared across more than 1.7 million comparisons, with each model participating in over 26,000 matches on average.

In contrast, reward modeling seeks to assign a single reward value to each prompt-response pair, with the added challenge of predicting rewards for unseen pairs. Typically, any given prompt-response pair is compared to another one only once, resulting in far fewer comparisons than in the arena setting. To understand how a single model can handle these seemingly different tasks, it is beneficial to have an overview of the BT model and its variants.

The original BT model (Bradley & Terry, 1952), aligned with the Luce-Shepard choice rule (Luce, 1959; Shepard, 1957), posits that in a simplified two-option scenario, the probability of selecting option $i$ from a set $\{i,j\}$ is proportional to the utility $u(\cdot)$ assigned to that option. Formally, this can be expressed as a softmax output of the log utilities $r(\cdot)$ (Bockenholt, 1988):

$$
P (i \succ j) = \frac {u (i)}{u (i) + u (j)} = \frac {\exp (r (i))}{\exp (r (i)) + \exp (r (j))} = \operatorname {softmax} (r (i), r (j)). \tag {1}
$$

In the LLM arena setting, estimating the values of $r(\cdot)$ is sufficient to achieve the primary goal of evaluating each chatbot's performance. This aligns closely with the original objective of the BT model in ranking sports teams (Bradley & Terry, 1952). Previous work has shown that, with enough pairwise competitions, one can estimate these ability scores well (Ford Jr, 1957; Han et al., 2020; Wu et al., 2022) up to a constant additive factor.
It is shown that to estimate $N$ scores via random pairwise comparisons, the theoretical lower bound on the number of comparisons is $O(N\log N)$, while the best-known methods require $O(N\log^3 N)$ comparisons (Han et al., 2020).

In contrast, the application of the BT model to reward modeling is not as straightforward. First, the implications of using the BT model in this context are not well-defined in the literature. For instance, if each prompt-response pair is treated as a player, how do we characterize the stochastic nature of human annotations as the game results? What assumptions need to be made? The answers to these questions have not been elaborated in the literature (Christiano et al., 2017; Stiennon et al., 2020; Ouyang et al., 2022; Bai et al., 2022a). In Appendix B, we provide an analysis of the underlying assumptions of applying the BT model to preference-based annotation as a game.

Additionally, estimating a separate $r(\cdot)$ for each prompt-response pair is impractical. In typical LLM alignment scenarios, we often have only $N/2$ comparisons for $N$ pairs, far below the theoretical lower bound for consistent estimation (Han et al., 2020). Furthermore, unlike the arena setting, there is no clear way to predict the score for a new, unseen pair. However, this challenge is not unique to LLM alignment; sports analysts also need to estimate a team's ability before many competitions or predict the ability of a new team. A common approach in such cases is to use features or covariates, such as team composition or funding status, to predict scores. For LLMs, one could use sentence embeddings as such covariates.

These extensions of the BT model were explored shortly after its original introduction: Springall (1973) assumed that $r(\cdot)$ could be expressed as a linear combination of covariates. In this scenario, the problem reduces to a classic logistic regression on the difference between two sets of covariates.
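As a concrete sketch of this covariate-based BT regression, the snippet below fits a logistic regression on embedding differences and recovers a hidden linear reward. The data, the linear reward, and the gradient-ascent fit are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Illustrative setup: a hidden linear reward r(psi) = psi @ w_true over
# d-dimensional covariates (e.g., sentence embeddings).
d, n = 5, 2000
w_true = rng.normal(size=d)
psi1 = rng.uniform(size=(n, d))  # covariates of the first item in each pair
psi2 = rng.uniform(size=(n, d))  # covariates of the second item

# BT sampling: P(first item preferred) = sigmoid(r(psi1) - r(psi2)).
y = (rng.uniform(size=n) < sigmoid((psi1 - psi2) @ w_true)).astype(float)

# Springall-style BT regression: logistic regression on the covariate
# difference, fit here by plain gradient ascent on the log-likelihood.
X = psi1 - psi2
w = np.zeros(d)
for _ in range(2000):
    w += 1.0 * X.T @ (y - sigmoid(X @ w)) / n

# w approximates w_true, so rewards of unseen items can be predicted from
# their covariates alone, before any comparisons involve them.
corr = np.corrcoef(w, w_true)[0, 1]
```

Because only reward differences enter the likelihood, the fitted weights recover the true ones up to the usual additive-constant ambiguity (absorbed here by omitting an intercept).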
This allows us to predict the score for a new team or prompt-response pair based on its covariates before any comparisons are made. More complex nonlinear models such as spline models have also been explored (De Soete & Winsberg, 1993). For a more comprehensive review from a statistical perspective, we refer readers to Cattelan (2012).

In practice, reward modeling in LLM alignment often employs neural networks, with multilayer perceptrons (MLPs) being a common choice to map embeddings to scores. However, there is currently no theoretical justification for why this particular choice of model and loss function is effective for learning reward functions. From a theoretical standpoint, this model variant can be viewed as a nonparametric logistic regression problem (Bockenholt, 1988; Schmidt-Hieber, 2020) with additional structural assumptions on the network, and our analysis builds on this framework. In the following section, we establish the asymptotic theory for learning reward models using MLPs and the BT loss.

# 2.2 ASYMPTOTIC THEORY ON MLP-BASED BT REGRESSION IN REWARD MODELING

In preference-based LLM alignment, we work with a dataset of the form $\mathcal{D}_{\mathrm{pref}} = \{(x_i,y_{1,i},y_{2,i},h_i)\}_{i\in [n]}$, where each tuple consists of the prompt $x_{i}$, the corresponding responses $y_{1,i}$ and $y_{2,i}$ sampled from the LLM $\ell$ to be aligned, $y_{1,i},y_{2,i}\sim \ell (x_i)$, and the human-annotated preference $h_i$, being 1 if $y_{1,i}$ is preferred and $-1$ otherwise.

Assume we have a known embedding function $\Psi(\cdot, \cdot): \mathcal{X} \times \mathcal{Y} \mapsto [0,1]^d$ such that there exists an unknown reward function $r: \mathbb{R}^d \mapsto \mathbb{R}$, and the true reward can be expressed as $r(\Psi(x,y))$ for all $x,y$. Without loss of generality, we assume the embeddings are scaled within the range $[0,1]$ — otherwise, we can scale the embeddings into this range.
Under this framework, reward modeling reduces to learning the function $r$. Note that under this formalism there is no need for a comparison to involve the same prompt. We will empirically explore the effects of using cross-prompt and same-prompt comparisons in our experiment section.

Denote our reward model as $\hat{r}_{\theta}$, parameterized by $\theta$; when there is no confusion, we abbreviate it as $\hat{r}$. We denote the vector of two rewards as $\hat{\boldsymbol{r}}$, and the class probability is then $\mathrm{softmax}(\hat{\boldsymbol{r}})$. Thus, training this reward model reduces to training a classifier with a cross-entropy loss, where the predicted conditional class probabilities are computed as $\mathrm{softmax}(\hat{r} (\Psi (x_1,y_1)),\hat{r} (\Psi (x_2,y_2)))$. Our theoretical analysis follows closely the work of Bos & Schmidt-Hieber (2022) and Schmidt-Hieber (2020). In this setting, we consider a special case of a model that preserves anti-symmetry: if we exchange the roles of $x_{1},y_{1}$ with $x_{2},y_{2}$, the reward difference becomes negative.

We demonstrate that an MLP can approximate the probability that the first pair is preferred over the second, and subsequently show that this approach enables the model to learn the underlying reward function effectively. For notational simplicity, let $\Psi_1^{(i)}$ and $\Psi_2^{(i)}$ represent the embeddings of the $i$-th pair, where $i = 1,\ldots ,n$. Without loss of generality, we assume $\Psi_{1}^{(i)}, \Psi_{2}^{(i)}\in [0,1]^{d}$. Let the true preference probability for the $i$-th pair be $\pmb{p}_0^{(i)}$, and the predicted probability be $\hat{\pmb{p}}^{(i)} = (\sigma (\hat{r}^{\Delta}(\Psi_{1}^{(i)},\Psi_{2}^{(i)})),1 - \sigma (\hat{r}^{\Delta}(\Psi_{1}^{(i)},\Psi_{2}^{(i)}))) = \mathrm{softmax}(\hat{r} (\Psi_{1}^{(i)}),\hat{r} (\Psi_{2}^{(i)}))$. The preference label $\pmb{h}^{(i)}$ equals $(1,0)$ if the first response is preferred, and $(0,1)$ otherwise.
In this way, the BT model can be reduced to a pairwise classification problem, where the cross-entropy objective is given by:

$$
\widetilde {\mathcal {L}} _ {\mathrm {C E}} (\boldsymbol {p}) = - \frac {1}{n} \sum_ {i = 1} ^ {n} \left(\boldsymbol {h} ^ {(i)}\right) ^ {\top} \log \left(\boldsymbol {p} ^ {(i)}\right), \quad \hat {\boldsymbol {p}} = \underset {\boldsymbol {p} \in \mathcal {F} _ {\theta}} {\arg \min } \widetilde {\mathcal {L}} _ {\mathrm {C E}} (\boldsymbol {p}) \tag {2}
$$

It is unrealistic to assume we can find an NN that actually attains the global minimum; we denote the difference between the fitted NN and the global minimum as

$$
\Delta_ {n} \left(\boldsymbol {p} _ {0}, \hat {\boldsymbol {p}}\right) = \mathbb {E} \left[ \widetilde {\mathcal {L}} _ {\mathrm {C E}} (\hat {\boldsymbol {p}}) - \min _ {\boldsymbol {p} \in \mathcal {F} _ {\theta}} \widetilde {\mathcal {L}} _ {\mathrm {C E}} (\boldsymbol {p}) \right] \tag {3}
$$

We consider the truncated KL risk following Bos & Schmidt-Hieber (2022) to overcome the divergence problem of the KL risk.

Definition 2.1 (Truncated KL risk (Bos & Schmidt-Hieber, 2022)). The $B$-truncated KL risk for a probability estimator $\hat{p}$ is

$$
R _ {B} \left(\boldsymbol {p} _ {0}, \hat {\boldsymbol {p}}\right) = \mathbb {E} \left[ \boldsymbol {p} _ {0} ^ {\top} \min \left(B, \log \frac {\boldsymbol {p} _ {0}}{\hat {\boldsymbol {p}}}\right) \right] \tag {4}
$$

Our main theorem establishes that, with regularity assumptions on the true reward function, an MLP reward model can accurately predict preference probabilities, as measured by the truncated KL risk.

Theorem 2.2 (Truncated KL risk bound, informal). Suppose the probability of preference induced by the true utility function satisfies the smoothness and regularity assumptions detailed in Assumption C.5, with smoothness characterised by constant $\beta$, regularity characterised by constant $\alpha$, and the dimension of the embedding being $d$.
Let $\hat{\pmb{p}}$ be an estimator from the family of MLPs satisfying the regularity assumptions detailed in Assumption C.2 with depth $L$. Define $\phi_{n} := 2^{\frac{(1 + \alpha)\beta + (3 + \alpha)d}{(1 + \alpha)\beta + d}} n^{-\frac{(1 + \alpha)\beta}{(1 + \alpha)\beta + d}}$. For sufficiently large $n$, there exist constants $C', C''$ such that when $\Delta_{n}(\boldsymbol{p}_{0}, \hat{\boldsymbol{p}}) \leq C'' B \phi_{n} L \log^{2}(n)$, then

$$
R _ {B} \left(\boldsymbol {p} _ {0}, \hat {\boldsymbol {p}}\right) \leq C ^ {\prime} B \phi_ {n} L \log^ {2} (n) \tag {5}
$$

where $a \lesssim b$ means there exists some constant $C$ s.t. $a \leq Cb$, and $a \asymp b$ means $a \lesssim b$ and $b \lesssim a$.

A detailed formal statement and proof are given as Theorem C.11 in the appendix.

Corollary 2.3 (Connecting probability to reward). Given that $(\sqrt{a} - \sqrt{b})^2 = (a - b)^2 / (\sqrt{a} + \sqrt{b})^2$, we can apply Lemma C.12 to demonstrate that in a large subset of the embedding space,

$$
\left| p _ {0} \left(\Psi_ {1}, \Psi_ {2}\right) - \hat {p} \left(\Psi_ {1}, \Psi_ {2}\right) \right| \lesssim \left| \sqrt {p _ {0}} + \sqrt {\hat {p}} \right| \sqrt {\phi_ {n} L} \log (n) \tag {6}
$$

$$
\left| r \left(\Psi_ {1}\right) - r \left(\Psi_ {2}\right) - \left(\hat {r} \left(\Psi_ {1}\right) - \hat {r} \left(\Psi_ {2}\right)\right) \right| \lesssim \frac {\left| \sqrt {p _ {0}} + \sqrt {\hat {p}} \right|}{\tilde {p} (1 - \tilde {p})} \sqrt {\phi_ {n} L} \log (n) \tag {7}
$$

where $\tilde{p}$ is a probability between $p_0$ and $\hat{p}$; the second line follows from the mean value theorem. This indicates that comparisons should be made between pairs that are relatively close in reward, to avoid the diverging behavior of the logit function.

Formal proofs and detailed theoretical analyses are provided in Appendix C.
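As a small numerical illustration of Definition 2.1 (toy probabilities, not from the paper), the sketch below shows how truncation caps the otherwise unbounded penalty incurred when the estimator puts near-zero mass on a class the true distribution favors:

```python
import numpy as np

def truncated_kl_risk(p0, p_hat, B):
    """B-truncated KL risk of Definition 2.1, averaged over pairs.

    p0, p_hat: (n, 2) arrays of true / estimated probabilities for
    (first response preferred, second response preferred).
    """
    return np.mean(np.sum(p0 * np.minimum(B, np.log(p0 / p_hat)), axis=1))

# Two pairs: one well-estimated, one where p_hat nearly misses class 1.
p0 = np.array([[0.7, 0.3], [0.9, 0.1]])
p_hat = np.array([[0.65, 0.35], [1e-6, 1.0 - 1e-6]])

kl = truncated_kl_risk(p0, p_hat, B=np.inf)  # ordinary (untruncated) KL
kl_b = truncated_kl_risk(p0, p_hat, B=2.0)   # per-term penalty capped at B
```

Here the badly mis-estimated second pair dominates the untruncated risk, while the $B$-truncated version keeps each term's contribution bounded by $B$, which is what makes the risk bound of Theorem 2.2 well-behaved.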
# 3 RETHINKING REWARD MODELING OBJECTIVES IN LLM ALIGNMENT

Practical implementation of the BT model imposes several requirements, including paired data, a specially designed anti-symmetric model structure, and the inherent assumptions of the BT model itself. This leads us to question whether there are alternative approaches to reward modeling. To address this, it is helpful to pause and reflect on the essential requirements of a reward model in LLM alignment. Our data consists of binary preferences, representing the relative ranking between two prompt-response pairs. We can view this as a form of binary classification, with the ultimate goal being to learn a continuous score for optimization. While the Bradley-Terry model serves this purpose through its log-likelihood loss and anti-symmetric structure, we now consider whether these formal elements are indispensable and explore possible simplifications.

# 3.1 THE UNIFIED TARGET OF ORDER CONSISTENCY

In basic binary classification, we prioritize accuracy over modeling output probabilities precisely. For example, neural classifiers, despite being overconfident (Guo et al., 2017), are widely used for their accuracy. Similarly, we do not require the BT model to predict comparison probabilities accurately, but rather to provide a reliable signal for ranking LLM outputs at inference.

Since our goal is response optimization using a reward proxy, it is sufficient to learn the reward function up to a monotonic transformation. While this might alter preference probabilities, it will not affect optimization results. To this end, the learned reward function $\hat{r}$ only needs to satisfy the following condition: for any two distinct prompt-response pairs $(x_{1},y_{1})$ and $(x_{2},y_{2})$, we require that $(\hat{r}(x_{1},y_{1}) - \hat{r}(x_{2},y_{2}))(r(x_{1},y_{1}) - r(x_{2},y_{2})) > 0$. In other words, the learned reward function must preserve the same ordering as the true reward function.
This condition implies the existence of a strictly monotonically increasing function $h$ such that $\hat{r}(\cdot) = h(r(\cdot))$. Such an equivalence is sufficient for optimizing the reward in settings such as sampling-based optimization and contextual bandits (Agarwal et al., 2014; Lattimore & Szepesvári, 2020). Ideally, if we have access to the ground-truth ordering, we can define $\hat{H} = \mathrm{sign}(r(x_1, y_1) - r(x_2, y_2))$. If we can (1) construct a model $\hat{H}: \mathcal{X} \times \mathcal{Y} \times \mathcal{X} \times \mathcal{Y} \mapsto \{+1, -1\}$ that predicts the correct ordering with high accuracy (i.e., $\hat{H}$ is order consistent), and (2) map this ordering into a continuous value, then we can meet the requirements for downstream optimization.

However, the observed ordering is often subject to noise from human annotators. Drawing on insights from the psychological bottleneck literature (Stewart et al., 2005; Guest et al., 2016), it is reasonable to assume that when the true underlying scores of two responses are similar, it becomes more difficult for annotators to distinguish between them. Formally, we have

Assumption 3.1 (Imperfect Preference Annotation in Approximating True Scores). Denote the true utility difference $\Delta r\coloneqq |r(x_1,y_1) - r(x_2,y_2)|$, and suppose the annotator function $h(x_{1},x_{2},y_{1},y_{2})$ provides feedback that probabilistically aligns with the oracle utility $r(x,y)$. We assume it is harder for annotators to label correctly when the reward difference $\Delta r$ between two pairs is small, according to:

$$
\mathbb {P} \left(h \left(x _ {1}, x _ {2}, y _ {1}, y _ {2}\right) \left(r \left(x _ {1}, y _ {1}\right) - r \left(x _ {2}, y _ {2}\right)\right) > 0 \mid \Delta r\right) = \xi (\Delta r), \tag {8}
$$

where $r$ is the unknown oracle utility function, and $\xi (\cdot)$ is any monotonically increasing function mapping to $[0.5, 1]$.
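Assumption 3.1 is easy to simulate. The sketch below uses the sigmoid instantiation $\xi(\Delta r) = \sigma(\beta \Delta r)$ that later appears in our experiments, but the toy rewards and sample sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def annotate(r1, r2, beta, rng):
    """Noisy preference under Assumption 3.1 with xi(dr) = sigmoid(beta * dr).

    Returns +1 if the first pair is (noisily) judged better, else -1.
    """
    dr = np.abs(r1 - r2)
    correct = rng.uniform(size=dr.shape) < sigmoid(beta * dr)
    true_pref = np.sign(r1 - r2)
    return np.where(correct, true_pref, -true_pref)

# Larger beta -> higher-quality annotation; pairs with similar rewards
# (small dr) remain the hardest to label, as the assumption requires.
r1, r2 = rng.normal(size=5000), rng.normal(size=5000)
agree = lambda beta: np.mean(annotate(r1, r2, beta, rng) == np.sign(r1 - r2))
```

Calling `agree` with a large `beta` yields near-perfect agreement with the true ordering, while a small `beta` produces annotations close to coin flips on similar-reward pairs.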
With those noisy annotations, the best we can aim for is order consistency with the noisy ordering:

Definition 3.2 (Order Consistency). We consider the following objective for an ordering model $\hat{H}$:

$$
\mathcal {L} _ {\mathrm {o c}} (\hat {r}) = \mathbb {E} _ {x _ {1}, x _ {2}, y _ {1}, y _ {2}, h} \mathbb {1} [ h = \hat {H} ] \tag {9}
$$

That is, the probability that the reward model's ordering agrees with the annotation.

We show with the following proposition that maximizing this objective helps us achieve order consistency with the true reward function:

Proposition 3.3 (Lower bound on population-level order consistency). Suppose a learned model $\hat{H}$ achieves objective equation 9 up to a $\delta \epsilon$ error for some small $0 < \delta < 1$ and $\epsilon < 3/20$, i.e.,

$$
\mathbb {E} _ {x _ {1}, x _ {2}, y _ {1}, y _ {2}, h} \mathbb {1} [ h = \hat {H} ] \geq 1 - \delta \epsilon \tag {10}
$$

Then, with probability at least $1 - \delta$ over $\Delta r$, for any given $\Delta r$ the order consistency of $\hat{r}$ with respect to the oracle utility is bounded below by:

$$
\mathbb {E} _ {x _ {1}, x _ {2}, y _ {1}, y _ {2}} \left[ \mathbb {1} \left(\hat {H} \cdot [ r (x _ {1}, y _ {1}) - r (x _ {2}, y _ {2}) ] \geq 0\right) | \Delta r \right] \geq (1 - \epsilon) \cdot \xi^ {2} (\Delta r) + \epsilon \cdot (1 - \xi (\Delta r)) ^ {2} \tag {11}
$$

Further, if we assume that with probability at least $1 - \kappa$, $\xi(\Delta r) \geq \sqrt{\epsilon^2 + 1 - 3\epsilon} + \epsilon$, then we have

$$
\mathbb {E} _ {x _ {1}, x _ {2}, y _ {1}, y _ {2} \sim \ell (x)} \left[ \mathbb {1} \left(\hat {H} \cdot [ r (x _ {1}, y _ {1}) - r (x _ {2}, y _ {2}) ] > 0\right) \right] \geq 1 - 4 \epsilon - \kappa - \delta \tag {12}
$$

In classical classification problems, accuracy is often optimized indirectly using losses like cross-entropy. This suggests applying techniques for improving prediction accuracy to reward modeling.
While the BT model uses a cross-entropy loss and an antisymmetric structure, in the following we show an alternative choice that leads to a simple classification-based algorithm.

# 3.2 THE BT MODEL AS A CHOICE

The BT model is designed to enforce order consistency in the following way: it models the probability that $h = 1$ using $\sigma(\hat{r}_{\mathrm{BT}}(x_1, y_1) - \hat{r}_{\mathrm{BT}}(x_2, y_2))$, where $\sigma$ is the sigmoid function. This allows training with a binary cross-entropy loss:

$$
\mathcal {L} _ {\mathrm {B T}} = - \mathbb {E} \left[ \mathbb {1} _ {h = 1} \log \sigma \left(\hat {r} _ {\mathrm {B T}} \left(x _ {1}, y _ {1}\right) - \hat {r} _ {\mathrm {B T}} \left(x _ {2}, y _ {2}\right)\right) + \mathbb {1} _ {h = - 1} \log \left(1 - \sigma \left(\hat {r} _ {\mathrm {B T}} \left(x _ {1}, y _ {1}\right) - \hat {r} _ {\mathrm {B T}} \left(x _ {2}, y _ {2}\right)\right)\right) \right] \tag {13}
$$

This structure guarantees that flipping the comparison order will also flip the prediction.

# 3.3 RELAXING THE ANTI-SYMMETRY CONSTRAINT: A CLASSIFICATION-BASED METHOD

BT's difference-in-reward structure inherently enforces antisymmetry. To better understand this, consider an order model $\hat{H}$ that outputs a preference vector for both prompt-response pairs, i.e., $\hat{H} := (\hat{H}_1, \hat{H}_2)$, where $\hat{H}_1, \hat{H}_2 : \mathcal{X} \times \mathcal{Y} \mapsto \{1, -1\}$ and ideally align with $(h, -h)$. The BT model imposes a hard constraint such that $\hat{H}_1 = -\hat{H}_2$. With sufficient data, instead of explicitly enforcing this constraint, the structure could be learned implicitly by ensuring order consistency, i.e., $\hat{H}_1 \approx h$ and $\hat{H}_2 \approx -h$. Consider a single model, e.g., a neural network or a tree, $\hat{H}_{\mathrm{clf}}$.
Under this construction, the order consistency can be written as $\mathcal{L}_{\mathrm{oc}} := \mathbb{E}(h = \hat{H}_{\mathrm{clf}}(x_1, y_1) \wedge -h = \hat{H}_{\mathrm{clf}}(x_2, y_2))$. We can train a model to predict these two parts separately; if the model predicts well, it will satisfy the antisymmetry constraint. A union bound of order consistency is

$$
\mathcal {L} _ {\mathrm {o c}} \leq \mathcal {L} _ {\mathrm {c l f}} := \mathbb {E} (h = \hat {H} _ {\mathrm {c l f}} (x _ {1}, y _ {1})) + \mathbb {E} (- h = \hat {H} _ {\mathrm {c l f}} (x _ {2}, y _ {2})) \tag {14}
$$

Instead of directly enforcing order consistency, we can therefore use the classification accuracy of each prompt-response pair as a surrogate. In practice, this means training a classifier by treating the annotations and prompt-response pairs independently. The logit can then be used as a proxy for the reward model. For an alternative perspective: instead of learning the joint probability $\mathbb{P}(i\succ j)$ that depends on both players $i$ and $j$, we focus on learning the marginal probability $\mathbb{P}(i$ wins$)$. These two are related via Jensen's inequality, with further details provided in Proposition D.1.

In both BT and classification-based reward modeling, there is no theoretical requirement to limit comparisons to the same prompts. For classification models, this is intuitive, as they do not rely on paired data. Similarly, in traditional BT applications, random pairwise comparisons among players are common. This further motivates our investigation into how random comparisons across different prompts affect reward modeling performance.

# 4 EXPERIMENTS

Objectives of Experiments In this section, we present empirical results to validate our key insights and methodological contributions. We begin by elaborating on our high-level design motivations for the experiments. We aim to address the following questions:

1. 
How effective are different learning objectives given by the order consistency framework? Specifically, how does the performance of a classification-based model compare to the widely used BT model in reward modeling? (Section 4.1)
2. How do various reward modeling methods perform as annotation quality and availability change? (Section 4.2)
3. How effective are cross-prompt comparisons in annotation? What is essential in determining whether this annotation design is beneficial in practice when it is available? (Section 4.3)

Key Design Principles: Reproducibility, Controllability, and Computational Efficiency Our experiments are designed with three key desiderata in mind: high reproducibility, controllability to enable diverse ablation studies, and computational efficiency. We prioritized these principles in all aspects of our experimental setup:

- Base Models. We conducted experiments using three open-source LLMs at different scales: Gemma2b, Gemma7b, and LLaMA3-8b (Team et al., 2024; Meta, 2024), alongside their SFT-ed versions (Gemma2b-SFT, Gemma7b-SFT, LLaMA3-8b-SFT), following the setup in Stiennon et al. (2020). Details are provided in Appendix E.
- Annotations. To ensure controllability in generation and annotation, and to closely simulate human annotation processes, we followed the approaches of Gao et al. (2023); Liu et al. (2023); Tran & Chris Glaze (2024); Dong et al. (2024) in utilizing open-source golden reward models as annotators, maintaining affordability for the community.
- Datasets. We used the Anthropic-Harmless and Anthropic-Helpful datasets (Bai et al., 2022a), as these are extensively studied in the context of reward modeling, and open-source golden reward models are available (Yang et al., 2024b; Dong et al., 2023; 2024).
- Evaluation of Learned Reward Models. We evaluated the effectiveness of different reward models using Best-of-N (BoN) sampling following Gao et al. (2023).
This choice was driven by three key considerations:

(1) Performance: Empirical studies show BoN achieves better performance than PPO (Dong et al., 2023; Yuan et al., 2023; Gao et al., 2023; Coste et al., 2023).
(2) Stability and Reduced Engineering Overhead: BoN requires no hyperparameter tuning and is more stable than PPO, leading to more consistent and interpretable results (Ivison et al., 2024; Xu et al., 2024).
(3) Computational Efficiency and Reproducibility: BoN's reusability across N generations during test time makes it more computationally efficient compared to policy-gradient optimizations (Li et al., 2023; Stiennon et al., 2020). In contrast, using PPO (Schulman et al., 2017) for our 12,000 experimental setups would be computationally prohibitive, since each setup requires distinct LLM fine-tuning.

For more details on the experiment setup, please refer to Appendix E.

# 4.1 COMPARING THE BRADLEY-TERRY AND CLASSIFICATION OBJECTIVES

Experiment Setups. In this section, we compare reward models trained with different learning objectives — the BT model and the Classification model — as well as their different implementations. For the BT model, the Siamese structure (Bromley et al., 1993) is required, making MLP the only viable backbone implementation (denoted in the plots as BT-MLP). For a fair comparison to BT and to isolate the source of gains from implementation, we implement the classification model with both an MLP (denoted in the plots as CLF-MLP) and LightGBM (Ke et al., 2017) (denoted in the plots as CLF-LGB), given its wide success in machine learning applications (Bentéjac et al., 2021) and especially its successful application in embedding-based reward modeling (Sun et al., 2023). We evaluate those reward models using BoN with $N = 500$, reporting improvements over base models to provide a clear comparison of their performance.
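A minimal end-to-end sketch of the classification recipe follows, with a toy linear reward and a hand-rolled logistic classifier standing in for the MLP/LightGBM backbones (all data and model choices here are illustrative): each item of a pair becomes an independent training example labeled by whether it won, the classifier's logit serves as the reward, and BoN keeps the highest-reward candidate.

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy ground truth: a linear reward over d-dimensional embeddings.
d, n_pairs = 5, 4000
w_true = rng.normal(size=d)
e1, e2 = rng.uniform(size=(n_pairs, d)), rng.uniform(size=(n_pairs, d))
pref = (e1 - e2) @ w_true > 0  # noiseless annotations, for simplicity

# Classification objective (equation 14): each item of a pair is an
# independent example, labeled by whether it was the preferred one.
X = np.vstack([e1, e2])
Xb = np.hstack([X, np.ones((len(X), 1))])   # add a bias column
y = np.concatenate([pref, ~pref]).astype(float)

w = np.zeros(d + 1)
for _ in range(3000):                       # logistic regression by GD
    w += 1.0 * Xb.T @ (y - sigmoid(Xb @ w)) / len(Xb)

def reward(E):
    """Classifier logit, used as the reward proxy."""
    return np.hstack([E, np.ones((len(E), 1))]) @ w

# Best-of-N: among N candidate responses, keep the highest-reward one.
cands = rng.uniform(size=(500, d))
best = cands[np.argmax(reward(cands))]
```

The classifier learns the marginal win probability, a monotonic transform of the true reward, so its logit ranks candidates consistently with the oracle even though no pairwise structure is enforced.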

![](images/4e604f593c40dbd3677cabc50d1741130b9b981d93a89822694f6d525e3d4b1a.jpg)
Figure 1: Comparison between BT and Classification (CLF) reward models. In general, the classification reward models achieve better performance than the BT reward models, with the added flexibility of using off-the-shelf classifiers beyond MLPs. Error bars are given by 5 runs with different seeds.

![](images/b7f09e65df294fb8aec3dcff8c6caf18a97686894405795b6711369bc86d7b1b.jpg)

Results and Takeaways. Figure 1 presents the results on both datasets. The x-axis shows different base models, and the y-axis shows the improvement in golden reward values on test prompts using BoN (i.e., the relative performance gains achieved through the reward models). The results indicate that classification-based reward models not only perform better than the BT reward models but also offer greater flexibility by allowing the use of diverse off-the-shelf machine learning algorithms. This makes them a competitive alternative to BT models in reward modeling.

# 4.2 HOW ANNOTATION QUALITIES AND QUANTITIES AFFECT PERFORMANCE

![](images/25606ba2cfe73787d1b5ba038cb7129a73b1a1345e1f100f335c42e32c4de0c1.jpg)
Figure 2: Changing the annotation quality. Top row: Harmless; Bottom row: Helpful. Results with 5 seeds.

![](images/e8a3b621a0dbe77930a04d3a11df4ce55ac9d49a576b641242568f07e7b17b8c.jpg)
Figure 3: Changing annotation availability. Top row: Harmless; Bottom row: Helpful. Results with 5 seeds.

Experiment Setups. In this section, we systematically vary both the quality and quantity of annotations to empirically assess the performance of different reward models under diverse conditions. For annotation quality, we use the sigmoid instantiation of equation 8, i.e., $\xi (\Delta r) = \sigma (\beta \Delta r)$, and vary the annotation quality parameter $\beta$ over the set [0.5, 0.7, 1.0, 3.0, 5.0, 10.0], using a fixed set of 40,000 annotations for training.
These $\beta$ values correspond to annotation error rates ranging from $5\%$ to $38\%$, which aligns with realistic human annotations (Zheng et al., 2023; Coste et al., 2023; Dubois et al., 2024). To explore the impact of annotation quantity, we experiment with datasets containing [5,000, 10,000, 20,000, 40,000] annotations, while holding $\beta = 1$ constant. Additional results jointly sweeping these two settings can be found in Appendix F.

Results and Takeaways. Figure 2 presents the results of varying annotation quality. From the plots, it is evident that as annotation error rates increase, the classification models exhibit greater robustness than the BT models, experiencing smaller performance drops. On the other hand, the BT models outperform the classification models when annotation quality is high (i.e., fewer than $10\%$ wrong labels). Figure 3 shows the results of varying annotation quantities. We can conclude that the classification models consistently outperform the BT models, not only delivering superior performance with the same number of annotations but also demonstrating more consistent improvements as the number of annotations increases.

# 4.3 LLM ALIGNMENT WITH PREFERENCE ANNOTATIONS BETWEEN DIFFERENT PROMPTS

Experiment Setups. In this section, we examine how cross-prompt comparisons, as opposed to annotations on the same prompt, affect the performance of different reward models. Specifically, for each training prompt, two responses are randomly generated by the LLMs and then presented to annotators (the golden reward model) for preference labeling. In the standard reward modeling setup, the annotators label response pairs generated from the same prompt. In the cross-prompt annotation setup (denoted by the postfix X-Prompt in the legends), we randomly select two prompt-response pairs from the dataset for comparison and annotation.
In the main text, we present results with $\beta = 1$ in the preference annotation process and 40,000 annotation pairs, as this is the most common setting. We provide full results for other experimental configurations in Appendix F.

![](images/fc3f23076a9a43cc256ddd77c0e6a93c36236407aceaa9279a688547b40a9e31.jpg)
Figure 4: Results comparing cross-prompt comparison-based annotations. Preference annotations on cross-prompt comparisons outperform same-prompt comparisons. Error bars are from 5 runs with different seeds.

![](images/f48466dd29d0dcb2ae978ae9fdda75172d2acac0db796329b2f1343aa022417b.jpg)

Results and Takeaways. Figure 4 shows the results across 2 datasets, 6 base models, and 3 different reward modeling methods. Shaded plots represent results from cross-prompt comparisons. The y-axis measures golden reward value improvements achieved through BoN sampling using the respective reward models. From these results, it is clear that cross-prompt comparisons outperform same-prompt comparisons, offering substantial improvements in reward model performance.

![](images/41fa8390fca139285c4f7d83b4a9f16e5b01a5d15af40120b5fa11e039553a72.jpg)

![](images/7c945bb54f3507acdb7ebb55bd205a18cf014729662f98ce1bf8f9bf9a7025bf.jpg)

![](images/af5afbfb0c2e1dcc466647350c73187780b2ce172e3f903bb7156432a86ec363.jpg)
Figure 5: Results comparing cross-prompt comparison-based annotations on synthetically generated similar or diversified comparison pairs. Cross-prompt comparison significantly improves the performance of reward modeling when same-prompt response pairs lack diversity. Error bars are from 5 runs with different seeds.

![](images/3ad66940b9d4376d41dfba360631e08cda11341b367b96e41b0b2a97b1462e82.jpg)

Further Investigation. Intuitively, cross-prompt comparison can improve the quality of annotation since it increases the diversity in generation.
To further explore this, we introduce two synthetic setups$^2$ designed to analyze the source of gains. In the Similar Comparison setup, response pairs generated for a single prompt exhibit similar quality and lack diversity. In the Diversified Comparison setup, response pairs for a single prompt are of different qualities. Specifically, we use a golden reward model to score 10 generated responses per prompt and select the two middle-ranked responses to simulate the Similar Comparison setup, whereas in the Diversified Comparison setting we use the highest- and lowest-scored responses to construct the response pair for each prompt.

Results and Takeaways. Figure 5 shows the results of these two synthetic setups. We can conclude that cross-prompt comparisons are essential when responses for a single prompt lack diversity. In the Similar Comparison setting, the conventional same-prompt comparisons often fail to produce informative reward models that can improve golden reward values. In contrast, cross-prompt annotations substantially enhance performance in such cases. In the Diversified Comparison setting, the need for cross-prompt comparisons diminishes as response diversity increases, though they do not have an evident negative impact on reward modeling performance. Comparing results across both settings, as well as the results in Figure 4, we find that the superiority of cross-prompt comparison lies in its generality with respect to response diversity — the performance achieved with cross-prompt annotations is more stable and less dependent on response diversity.

![](images/880f961fefbd12dbc6b427058a418843903e7209eb3f1eac2a300980a25bb3e1.jpg)
Figure 6: Comparing the averaged absolute difference in scores in pairwise annotations (x-axis) and improvements achieved by using cross-prompt annotations (y-axis). The two variables are highly correlated.
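The two synthetic pairing rules described above can be sketched as follows (a hypothetical helper, not the paper's code; `golden_scores` stands for the golden reward model's scores of the candidate responses):

```python
def select_pair(responses, golden_scores, mode):
    """Pick the response pair to annotate for one prompt.

    'similar' -> the two middle-ranked responses (Similar Comparison setup)
    'diverse' -> the highest- and lowest-scored responses (Diversified setup)
    """
    # indices of the responses, sorted from lowest to highest golden score
    order = sorted(range(len(responses)), key=lambda i: golden_scores[i])
    if mode == "similar":
        mid = len(order) // 2
        i, j = order[mid - 1], order[mid]
    elif mode == "diverse":
        i, j = order[-1], order[0]
    else:
        raise ValueError(f"unknown mode: {mode}")
    return responses[i], responses[j]

responses = [f"response_{k}" for k in range(10)]
scores = [0.1, 0.9, 0.4, 0.8, 0.2, 0.7, 0.3, 0.6, 0.0, 0.5]
print(select_pair(responses, scores, "similar"))  # two middle-ranked responses
print(select_pair(responses, scores, "diverse"))  # best vs. worst response
```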

![](images/106f101ca305326ed25330519c7e73701355534da354f56ef43474d9e8a8fdde.jpg)

Finally, we examine the relationship between the average absolute score differences in pairwise annotations (x-axis) across three setups (i.e., Random, where two responses are randomly selected for each pair, and Similar and Diverse, the two synthetic settings) and the corresponding performance improvements from cross-prompt annotations (y-axis). Scatter plots with linear fits are shown in Figure 6. The strong correlation indicates that cross-prompt annotations are most beneficial when same-prompt responses lack diversity. Importantly, we find that the average reward differences between pairwise data in the Similar synthetic setting and in the Random setting are close; this implies that, in practice, when two randomly selected responses to a single prompt are annotated, those pairs are likely to suffer from high response similarity. This is exactly the case in which cross-prompt annotation can be applied to improve performance.

Conclusive Remark on Experiments. Our experimental results highlight the advantages of classification-based reward models over traditional BT models, particularly in terms of flexibility and robustness to annotation quality and quantity. While BT models perform better under high-quality annotations, classification models demonstrate superior overall performance and resilience to increasing annotation error rates. Additionally, our empirical studies on cross-prompt comparisons show that they significantly improve reward modeling, especially when responses to the same prompt lack diversity. Through synthetic experiments, we further demonstrate that the challenge of limited diversity is likely to occur in practice, providing additional justification for exploring this annotation method in future research.
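The annotation designs studied in this section can be summarized in a small simulation, a sketch under the paper's noise model $\xi(\Delta r) = \sigma(\beta \Delta r)$; the data structures and function names here are illustrative, not the paper's implementation:

```python
import math
import random

def annotate(r_a, r_b, beta, rng):
    """Noisy annotator: labels A as preferred with probability
    sigmoid(beta * (r_a - r_b)); larger beta means fewer label errors."""
    p_a = 1.0 / (1.0 + math.exp(-beta * (r_a - r_b)))
    return rng.random() < p_a

def same_prompt_pairs(data, rng):
    """Standard setup: compare two responses to the same prompt."""
    return [tuple(rng.sample(items, 2)) for items in data.values()]

def cross_prompt_pairs(data, n_pairs, rng):
    """X-Prompt setup: compare two prompt-response pairs drawn from the whole dataset."""
    pool = [(p, r) for p, rs in data.items() for r in rs]
    return [tuple(rng.sample(pool, 2)) for _ in range(n_pairs)]

rng = random.Random(0)
# data maps each prompt to (response, golden_reward) tuples
data = {f"prompt_{i}": [(f"resp_{i}{j}", rng.gauss(0, 1)) for j in range(2)]
        for i in range(4)}
pairs = cross_prompt_pairs(data, n_pairs=6, rng=rng)
labels = [annotate(a[1][1], b[1][1], beta=1.0, rng=rng) for a, b in pairs]
```

Training a reward model on `pairs` and `labels` then proceeds identically in both setups, which is the "why not" observation behind cross-prompt annotation.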

# ACKNOWLEDGMENT

We sincerely appreciate the efforts of our anonymous reviewers, ACs, and SACs for their valuable feedback and constructive suggestions, which have made our paper more complete. We thank Ziping Xu and Jalaj Bhandari for their early-stage discussions and encouragement during the 1st RL Conference, which was crucial in motivating this paper. Additionally, we are grateful to Rui Yang and Ruocheng Guo for their insightful discussions that contributed to the development of this research.

# REFERENCES

Alekh Agarwal, Daniel Hsu, Satyen Kale, John Langford, Lihong Li, and Robert Schapire. Taming the monster: A fast and simple algorithm for contextual bandits. In International conference on machine learning, pp. 1638-1646. PMLR, 2014.
Mohammad Gheshlaghi Azar, Mark Rowland, Bilal Piot, Daniel Guo, Daniele Calandriello, Michal Valko, and Rémi Munos. A general theoretical paradigm to understand learning from human preferences. arXiv preprint arXiv:2310.12036, 2023.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022a.
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073, 2022b.
Candice Bentéjac, Anna Csörgő, and Gonzalo Martínez-Muñoz. A comparative analysis of gradient boosting algorithms. Artificial Intelligence Review, 54:1937-1967, 2021.
U Bockenholt. A logistic representation of multivariate paired-comparison models. Journal of mathematical psychology, 32(1):44-63, 1988.
Thijs Bos and Johannes Schmidt-Hieber. Convergence rates of deep relu networks for multiclass classification. Electronic Journal of Statistics, 16(1):2724-2773, 2022.
Ralph Allan Bradley and Milton E Terry. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3/4):324-345, 1952.
Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard Säckinger, and Roopak Shah. Signature verification using a "siamese" time delay neural network. Advances in neural information processing systems, 6, 1993.
Manuela Cattelan. Models for paired comparison data: A review with emphasis on dependent data. Statistical Science, 2012.
Xiaohong Chen and Demian Pouzo. Estimation of nonparametric conditional moment models with possibly nonsmooth generalized residuals. Econometrica, 80(1):277-321, 2012.
Wei-Lin Chiang, Lianmin Zheng, Ying Sheng, Anastasios Nikolas Angelopoulos, Tianle Li, Dacheng Li, Hao Zhang, Banghua Zhu, Michael Jordan, Joseph E Gonzalez, et al. Chatbot arena: An open platform for evaluating llms by human preference. arXiv preprint arXiv:2403.04132, 2024.
Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30, 2017.
Thomas Coste, Usman Anwar, Robert Kirk, and David Krueger. Reward model ensembles help mitigate overoptimization. arXiv preprint arXiv:2310.02743, 2023.
Geert De Soete and Suzanne Winsberg. A thurstonian pairwise choice model with univariate and multivariate spline transformations. Psychometrika, 58(2):233-256, 1993.
Hanze Dong, Wei Xiong, Deepanshu Goyal, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, and Tong Zhang. Raft: Reward ranked finetuning for generative foundation model alignment. arXiv preprint arXiv:2304.06767, 2023.
Hanze Dong, Wei Xiong, Bo Pang, Haoxiang Wang, Han Zhao, Yingbo Zhou, Nan Jiang, Doyen Sahoo, Caiming Xiong, and Tong Zhang. Rlhf workflow: From reward modeling to online rlhf. arXiv preprint arXiv:2405.07863, 2024.
Yann Dubois, Chen Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy S Liang, and Tatsunori B Hashimoto. Alpacafarm: A simulation framework for methods that learn from human feedback. Advances in Neural Information Processing Systems, 36, 2024.
Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. Kto: Model alignment as prospect theoretic optimization. arXiv preprint arXiv:2402.01306, 2024.
Lester R Ford Jr. Solution of a ranking problem from binary comparisons. The American Mathematical Monthly, 64(8P2):28-33, 1957.
Leo Gao, John Schulman, and Jacob Hilton. Scaling laws for reward model overoptimization. In International Conference on Machine Learning, pp. 10835-10866. PMLR, 2023.
Duncan Guest, James S Adelman, and Christopher Kent. Relative judgement is relatively difficult: Evidence against the role of relative judgement in absolute identification. Psychonomic Bulletin & Review, 23:922-931, 2016.
Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In International conference on machine learning, pp. 1321-1330. PMLR, 2017.
Shangmin Guo, Biao Zhang, Tianlin Liu, Tianqi Liu, Misha Khalman, Felipe Llinares, Alexandre Rame, Thomas Mesnard, Yao Zhao, Bilal Piot, et al. Direct language model alignment from online ai feedback. arXiv preprint arXiv:2402.04792, 2024.
Ruijian Han, Rougang Ye, Chunxi Tan, and Kani Chen. Asymptotic theory of sparse bradley-terry model. The Annals of Applied Probability, 30(5):2491-2515, 2020.
Hamish Ivison, Yizhong Wang, Jiacheng Liu, Zeqiu Wu, Valentina Pyatkin, Nathan Lambert, Noah A Smith, Yejin Choi, and Hannaneh Hajishirzi. Unpacking dpo and ppo: Disentangling best practices for learning from preference feedback. arXiv preprint arXiv:2406.09279, 2024.
Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. Lightgbm: A highly efficient gradient boosting decision tree.
Advances in neural information processing systems, 30, 2017.
Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023.
Tor Lattimore and Csaba Szepesvári. Bandit algorithms. Cambridge University Press, 2020.
Harrison Lee, Samrat Phatale, Hassan Mansoor, Kellie Lu, Thomas Mesnard, Colton Bishop, Victor Carbune, and Abhinav Rastogi. Rlaif: Scaling reinforcement learning from human feedback with ai feedback. arXiv preprint arXiv:2309.00267, 2023.
Ziniu Li, Tian Xu, Yushun Zhang, Yang Yu, Ruoyu Sun, and Zhi-Quan Luo. Remax: A simple, effective, and efficient method for aligning large language models. arXiv preprint arXiv:2310.10505, 2023.
Tianqi Liu, Yao Zhao, Rishabh Joshi, Misha Khalman, Mohammad Saleh, Peter J Liu, and Jialu Liu. Statistical rejection sampling improves preference optimization. arXiv preprint arXiv:2309.06657, 2023.
Zhihan Liu, Miao Lu, Shenao Zhang, Boyi Liu, Hongyi Guo, Yingxiang Yang, Jose Blanchet, and Zhaoran Wang. Provably mitigating overoptimization in rlhf: Your sft loss is implicitly an adversarial regularizer. arXiv preprint arXiv:2405.16436, 2024.
R Duncan Luce. Individual choice behavior, volume 4. Wiley New York, 1959.
Meta AI. Introducing meta llama 3: The most capable openly available llm to date. Meta AI, 2024.
Rémi Munos, Michal Valko, Daniele Calandriello, Mohammad Gheshlaghi Azar, Mark Rowland, Zhaohan Daniel Guo, Yunhao Tang, Matthieu Geist, Thomas Mesnard, Andrea Michi, et al. Nash learning from human feedback. arXiv preprint arXiv:2312.00886, 2023.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback.
Advances in Neural Information Processing Systems, 35:27730-27744, 2022.
Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36, 2024.
Johannes Schmidt-Hieber. Nonparametric regression using deep neural networks with relu activation function. Annals of Statistics, 2020.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
Roger N Shepard. Stimulus and response generalization: A stochastic model relating generalization to distance in psychological space. Psychometrika, 22(4):325-345, 1957.
Gordon Simons and Yi-Ching Yao. Asymptotics when the number of parameters tends to infinity in the bradley-terry model for paired comparisons. The Annals of Statistics, 27(3):1041-1060, 1999.
A Springall. Response surface fitting using a generalization of the bradley-terry paired comparison model. Journal of the Royal Statistical Society Series C: Applied Statistics, 22(1):59-68, 1973.
Neil Stewart, Gordon DA Brown, and Nick Chater. Absolute identification by relative judgment. Psychological review, 112(4):881, 2005.
Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008-3021, 2020.
Hao Sun. Reinforcement learning in the era of llms: What is essential? what is needed? an rl perspective on rlhf, prompting, and beyond. arXiv preprint arXiv:2310.06147, 2023.
Hao Sun and Mihaela van der Schaar. Inverse-alignment: Inverse reinforcement learning from demonstrations for llm alignment. arXiv preprint arXiv:2405.15624, 2024.
Hao Sun, Alihan Hüyük, and Mihaela van der Schaar.
Query-dependent prompt evaluation and optimization with offline inverse rl. In The Twelfth International Conference on Learning Representations, 2023.
Yunhao Tang, Zhaohan Daniel Guo, Zeyu Zheng, Daniele Calandriello, Rémi Munos, Mark Rowland, Pierre Harvey Richemond, Michal Valko, Bernardo Ávila Pires, and Bilal Piot. Generalized preference optimization: A unified approach to offline alignment. arXiv preprint arXiv:2402.05749, 2024.
Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Riviere, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Open models based on gemini research and technology. arXiv preprint arXiv:2403.08295, 2024.
LL Thurstone. A law of comparative judgment. Psychological Review, 34(4), 1927.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
Hoang Tran and BH Chris Glaze. Snorkel-mistral-pairrm-dpo, 2024.
Amos Tversky and Daniel Kahneman. Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and uncertainty, 5:297-323, 1992.
Leandro von Werra, Younes Belkada, Lewis Tunstall, Edward Beeching, Tristan Thrush, Nathan Lambert, and Shengyi Huang. Trl: Transformer reinforcement learning. https://github.com/huggingface/trl, 2020.
Weichen Wu, Brian W Junker, and Nynke Niezink. Asymptotic comparison of identifying constraints for bradley-terry models. arXiv preprint arXiv:2205.04341, 2022.
Shusheng Xu, Wei Fu, Jiaxuan Gao, Wenjie Ye, Weilin Liu, Zhiyu Mei, Guangju Wang, Chao Yu, and Yi Wu. Is dpo superior to ppo for llm alignment? a comprehensive study. arXiv preprint arXiv:2404.10719, 2024.
Rui Yang, Ruomeng Ding, Yong Lin, Huan Zhang, and Tong Zhang. Regularizing hidden states enables learning generalizable reward model for llms.
arXiv preprint arXiv:2406.10216, 2024a.
Rui Yang, Xiaoman Pan, Feng Luo, Shuang Qiu, Han Zhong, Dong Yu, and Jianshu Chen. Rewards-in-context: Multi-objective alignment of foundation models with dynamic preference adjustment. arXiv preprint arXiv:2402.10207, 2024b.
Atsutomo Yara and Yoshikazu Terada. Nonparametric logistic regression with deep learning. arXiv preprint arXiv:2401.12482, 2024.
Yueqin Yin, Zhendong Wang, Yi Gu, Hai Huang, Weizhu Chen, and Mingyuan Zhou. Relative preference optimization: Enhancing llm alignment through contrasting responses across identical and diverse prompts. arXiv preprint arXiv:2402.10958, 2024.
Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. Rrhf: Rank responses to align language models with human feedback without tears. arXiv preprint arXiv:2304.05302, 2023.
Lunjun Zhang, Arian Hosseini, Hritik Bansal, Mehran Kazemi, Aviral Kumar, and Rishabh Agarwal. Generative verifiers: Reward modeling as next-token prediction. arXiv preprint arXiv:2408.15240, 2024.
Yao Zhao, Rishabh Joshi, Tianqi Liu, Misha Khalman, Mohammad Saleh, and Peter J Liu. Slic-hf: Sequence likelihood calibration with human feedback. arXiv preprint arXiv:2305.10425, 2023.
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, et al. Secrets of rlhf in large language models part i: Ppo. arXiv preprint arXiv:2307.04964, 2023.
Rui Zheng, Hongyi Guo, Zhihan Liu, Xiaoying Zhang, Yuanshun Yao, Xiaojun Xu, Zhaoran Wang, Zhiheng Xi, Tao Gui, Qi Zhang, et al. Toward optimal llm alignments using two-player games. arXiv preprint arXiv:2406.10977, 2024.
Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019.

# A RELATED WORK

The Bradley-Terry Model in RLHF Harking back to the seminal paper on RLHF (Christiano et al., 2017), the motivation for using a Bradley-Terry-type model is to understand it as a specialization of the Luce-Shepard choice rule (Luce, 1959; Shepard, 1957). In the simplified two-option setting, the Luce-Shepard choice rule says that the probability of choosing a particular option from a set of alternatives is proportional to the utility assigned to that option. This rule aligns well with the Bradley-Terry model (Bradley & Terry, 1952), in which pairwise comparisons between players are used to estimate their ability (utility) scores: a game is inherently stochastic, while the ability of a player is fixed. In later work on alignment from human feedback (Stiennon et al., 2020; Ouyang et al., 2022; Rafailov et al., 2024), this model is directly applied and has achieved great success in improving the quality of LLMs across various tasks where direct evaluation is impossible.

The usage of the Bradley-Terry model in RLHF has been challenged in the literature from different perspectives: Munos et al. (2023) point out that the Bradley-Terry model cannot capture non-transitive preferences, and that maximizing the corresponding Elo score can be a different objective from optimizing preferences. Azar et al. (2023) point out that using Bradley-Terry modeling in direct preference optimization methods (Rafailov et al., 2024) can lead to problematic overfitting when sampled preferences are deterministic.

Bradley-Terry Model for Parameter Estimation and Prediction The classic Bradley-Terry model, suited to the arena setting, has been studied theoretically ever since Bradley & Terry (1952) themselves. Ford Jr (1957) established identifiability and asymptotic theory. More recently, Wu et al. (2022) compared asymptotic theory under different identifiability assumptions.
Simons & Yao (1999) studied the Bradley-Terry model when the number of players diverges, under the assumption that the number of competitions each player plays also diverges. In contrast, Han et al. (2020) studied the setting where comparisons are sparse and proposed a consistent procedure that needs only $O(N \log^3 (N))$ comparisons. On the prediction side, using features to predict ability was explored soon after Bradley & Terry (1952): Springall (1973) assumed ability to be a linear combination of features, while De Soete & Winsberg (1993) used spline functions. Bockenholt (1988) cast Bradley-Terry as a special logistic regression problem and opened investigation from this lens. Chen & Pouzo (2012) developed theory for a general class of nonparametric models including logistic regression. On the deep learning side, Schmidt-Hieber (2020) studied asymptotic theory for deep neural networks in nonparametric regression, and a follow-up paper (Bos & Schmidt-Hieber, 2022) studied nonparametric logistic regression using deep neural networks. However, this theory cannot be directly applied to the Bradley-Terry model, because we need a single reward model for all prompt-response pairs, and the predicted probability is not arbitrary but is obtained by passing the two pairs through one shared network. Settings like that of Bos & Schmidt-Hieber (2022) can be seen as passing the two pairs through two arbitrary neural networks rather than one.

Non-Pairwise Data and Cross-Prompt Comparisons in RLHF Non-pairwise data (Ethayarajh et al., 2024) and cross-prompt comparisons (Yin et al., 2024) have been explored in the literature. KTO (Ethayarajh et al., 2024) is rooted in prospect theory (Tversky & Kahneman, 1992), in which risk-averse humans assign different values to losses and gains. RPO (Yin et al., 2024) proposes to reflect the complex nature of human learning in alignment by involving response comparisons across both identical and similar questions.
On one hand, RPO supports our insight into making cross-prompt comparisons in practice by drawing inspiration from human learning. On the other hand, our work and RPO differ in both motivation and implementation: in our work, the cross-prompt annotation practice is motivated by a "why not" observation — neither our order-consistency theory nor the BT model for prediction assigns any special role to single-prompt comparisons. Beyond motivation, there are several significant differences between RPO and our cross-prompt comparison: RPO does not study annotation efficiency or the optimal way of annotating under a budget, whereas our focus is on cross-prompt annotation as an alternative annotation scheme. Moreover, RPO considers a strategic re-weighting method based on embedding-space similarity among responses, leading to a direct alignment method. All of those works are direct alignment methods (Zheng et al., 2024; Liu et al., 2024; Zhao et al., 2023; Azar et al., 2023; Ethayarajh et al., 2024), which do not explicitly build reward models as intermediate steps. One potential drawback of such methods without explicit reward models is that it is challenging or impossible to conduct inference-time optimization (Liu et al., 2023), especially when compared with our lightweight embedding-based reward models. Recent work has also demonstrated the superiority of two-stage methods over direct alignment methods (Ivison et al., 2024).

Representation Learning in Reward Modeling In our work, we separate reward model learning from the representation learning task and focus only on the reward modeling part to better isolate the source of gains. Recent advances in generative reward models highlight that it is possible to use generation tasks to regularize the learned embeddings and improve reward modeling performance (Yang et al., 2024a). Also, in Zhang et al.
(2024), generative verifiers construct reward predictions through next-token prediction, maximally leveraging the ability of LLMs as token generators to improve their evaluation ability. While our research on reward models is orthogonal to this line of work, future exploration combining either embedding learning or the generative ability of LLMs with different order-consistency objectives could be a highly promising direction to enjoy improvements from both sides.

# B UNDERLYING ASSUMPTIONS OF APPLYING THE BRADLEY-TERRY MODEL IN PREFERENCE ANNOTATIONS

Different from prior works that challenge the practical application of the BT model (Azar et al., 2023; Munos et al., 2023; Tang et al., 2024), we revisit the basic assumptions of modeling human preference annotations with the BT model, and answer the following question:

What are the underlying assumptions when we assume the BT model can be used to model the preference annotations?

The canonical interpretation of how to apply equation 1 in preference-based learning is that, when randomly sampling a human annotator from the population, the probability that the annotator chooses a response as preferred is proportional to that response's utility value. Formally, we use $x, y_1, y_2$ to denote the prompt and responses; the above interpretation implies the following assumptions:

Assumption B.1 (Existence of a Deterministic Oracle Utility Value). Given $x$, the utility value of any response $y$ exists and is deterministic, i.e., $r_{x,y}$ is a deterministic real value for all $x, y$.

Definition B.2 (Individual Annotation under Bias).
For annotator $A$, the annotation result is deterministic and depends on the comparison of their biased evaluations of the utility values of both responses $y_{1}$ and $y_{2}$, according to

$$
P _ {A} \left(y _ {1} \succ y _ {2} \mid x\right) = \mathbb {1} \left(r _ {x, y _ {1}} ^ {(A)} > r _ {x, y _ {2}} ^ {(A)}\right) = \mathbb {1} \left(r _ {x, y _ {1}} - r _ {x, y _ {2}} > b \left(x, y _ {1}, A\right) - b \left(x, y _ {2}, A\right)\right) \tag {15}
$$

Based on the above definitions, we can now explicitly state the assumption under which the BT model applies to this randomized annotation process:

Assumption B.3 (Logistic Difference Assumption). The difference $b(x, y_1, A) - b(x, y_2, A)$ is sampled i.i.d. from a standard logistic distribution for all $x, y, A$:

$$
P (b (x, y _ {1}, A) - b (x, y _ {2}, A) \leq t) = \frac {1}{1 + e ^ {- t}} \tag {16}
$$

Remark B.4 (Transitive property of differences). A reader might (rightfully) worry whether this assumption is consistent with the transitive property of differences: when considering multiple comparisons, we must have $b(x,y_1,A) - b(x,y_3,A) = b(x,y_1,A) - b(x,y_2,A) + b(x,y_2,A) - b(x,y_3,A)$, while the sum of independent logistic random variables is not logistic. Note, however, that the two summed terms are not independent, and the assumption can be satisfied by assuming all annotator biases are independently Gumbel distributed with the same scale parameter, since the difference of two i.i.d. Gumbel variables follows a logistic distribution.

With this assumption, we arrive at the BT-type model:

Proposition B.5 (BT-type Model on Annotation under Logistic Difference Assumption).

$$
P \left(y _ {1} \succ y _ {2} \mid x\right) = P \left(r _ {x, y _ {1}} - r _ {x, y _ {2}} > b \left(x, y _ {1}, A\right) - b \left(x, y _ {2}, A\right)\right) = \frac {1}{1 + e ^ {- \left(r _ {x , y _ {1}} - r _ {x , y _ {2}}\right)}} \tag {17}
$$

Alternatively, we can adopt the following assumption and obtain another model:

Assumption B.6 (Gaussian Difference Assumption).
The difference $b(x, y_1, A) - b(x, y_2, A)$ is sampled from a standard Gaussian distribution:

$$
b \left(x, y _ {1}, A\right) - b \left(x, y _ {2}, A\right) \sim \mathcal {N} (0, 1) \tag {18}
$$

Proposition B.7 (Model on Annotation under Gaussian Difference Assumption).

$$
P \left(y _ {1} \succ y _ {2} | x\right) = P \left(r _ {x, y _ {1}} - r _ {x, y _ {2}} > b \left(x, y _ {1}, A\right) - b \left(x, y _ {2}, A\right)\right) = \Phi \left(r _ {x, y _ {1}} - r _ {x, y _ {2}}\right), \tag {19}
$$

where $\Phi$ is the CDF of the standard Gaussian distribution.

The next paragraphs elaborate on the implications of these different assumptions.

Understanding the BT Model from a Gaussian Assumption on Players' Performance in a Game Technically, the BT model assumes that a player's ability in a specific game can be represented by a score. To better understand the rationale behind the BT model, we can take a step back and consider a Gaussian assumption on game performances. Specifically, we assume that in each two-player game, a player's performance can be modeled as a Gaussian distribution centered on their score, with the variance of the distribution capturing the stochastic nature of the game and the variability in the player's performance.

For instance, when player $A$ with score $S_A$ and variance $\sigma_A^2$ competes against player $B$ with score $S_B$ and variance $\sigma_B^2$ , the probability that $A$ defeats $B$ in a game (denoted as $A \succ B$ ), given the Gaussian assumption on performance, is

$$
P (A \succ B) = P \left(x _ {a} \geq x _ {b} \mid x _ {a} \sim N \left(S _ {A}, \sigma_ {A} ^ {2}\right), x _ {b} \sim N \left(S _ {B}, \sigma_ {B} ^ {2}\right)\right) = \frac {1}{2} + \frac {1}{2} \operatorname {erf} \left(\frac {S _ {A} - S _ {B}}{\sqrt {2 \left(\sigma_ {A} ^ {2} + \sigma_ {B} ^ {2}\right)}}\right). \tag {20}
$$

This is known as the Thurstonian model (Thurstone, 1927). 
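Remark B.4 and Proposition B.5 can also be checked by simulation: drawing each annotator bias i.i.d. from a standard Gumbel distribution makes the bias difference standard logistic, so the empirical preference rate should recover the BT probability in equation 17. A minimal sketch in pure Python (the function names are ours, not from the paper's released code):

```python
import math
import random

def bt_prob(r1, r2):
    """BT preference probability from equation 17."""
    return 1.0 / (1.0 + math.exp(-(r1 - r2)))

def simulated_preference_rate(r1, r2, n=200_000, seed=0):
    """Empirical P(y1 preferred over y2) when each annotator bias is an
    i.i.d. Gumbel(0, 1) draw, so b1 - b2 is standard logistic (equation 16)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n):
        b1 = -math.log(-math.log(rng.random()))  # Gumbel(0, 1) via inverse CDF
        b2 = -math.log(-math.log(rng.random()))
        if r1 - r2 > b1 - b2:  # annotation rule of Definition B.2
            wins += 1
    return wins / n
```

With a reward gap of $r_{x,y_1} - r_{x,y_2} = 1$, the empirical rate comes out close to $\sigma(1) \approx 0.731$, matching the BT prediction.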
In practice, if we further assume that the stochasticity characterized by the performance variance is game-dependent rather than player-dependent, the denominator in equation 20 becomes a player-agnostic constant, which can be absorbed into the players' scores. In other words, the absolute values in the scoring system are scaled by the inherent variability of the game.

The BT model uses $\tanh (\cdot)$ rather than the error function $\operatorname{erf}(\cdot)$ , for the sake of a better empirical fit and mathematical convenience (note that $\frac{1}{1 + e^{-x}} = \frac{1}{2} + \frac{1}{2}\tanh(x/2)$ ). Formally, the BT model is

$$
P (A \succ B) = \frac {1}{1 + e ^ {- \left(S _ {A} - S _ {B}\right)}} \tag {21}
$$

# C ASYMPTOTIC THEORY ON BT REWARD MODELING

Recall that we have a known embedding function $\Psi(\cdot, \cdot): \mathcal{X} \times \mathcal{Y} \to [0,1]^d$ such that there exists an unknown function $g: \mathbb{R}^d \to \mathbb{R}$ with the true utility function satisfying $r(y,x) = g(\Psi(y,x))$ for all $y,x$ . We assume the embedding to have range $[0,1]$ ; if the pretrained model does not produce this range, we can rescale its output. Reward modeling then reduces to learning the function $g$ . Observe that under this formalism, the two sides of a comparison need not share the same prompt.

# C.1 ANALYZING BT REWARD MODELING AS NON-PARAMETRIC LOGISTIC REGRESSION WITH ANTI-SYMMETRIC STRUCTURE

Denote our reward model as $\hat{r}_{\theta}$ , parameterized by $\theta$ ; when there is no confusion, we abbreviate it as $\hat{r}$ . The reward difference between two prompt-response pairs $(x_{1},y_{1})$ and $(x_{2},y_{2})$ is then $\hat{r}^{\Delta}(\Psi(x_{1},y_{1}),\Psi(x_{2},y_{2})) := \hat{r}(\Psi(x_{1},y_{1})) - \hat{r}(\Psi(x_{2},y_{2}))$ . We could have $x_{1} = x_{2}$ , i.e., a matched prompt, but this is not necessary. The predicted probability that $(x_{1},y_{1})$ is better than $(x_{2},y_{2})$ is then $\sigma(\hat{r}^{\Delta}(\Psi(x_{1},y_{1}),\Psi(x_{2},y_{2})))$ , with $\sigma$ being the sigmoid function. 
This is the same as taking a softmax over the two rewards, i.e., $\sigma(\hat{r}^{\Delta}(\Psi(x_{1},y_{1}),\Psi(x_{2},y_{2})))$ is the first coordinate of

$\mathrm{softmax}(\hat{r} (\Psi (x_1,y_1)),\hat{r} (\Psi (x_2,y_2)))$ . The second viewpoint makes it easier to apply theoretical analysis techniques developed in the literature (Schmidt-Hieber, 2020; Bos & Schmidt-Hieber, 2022). To that end, we denote the vector of two rewards as $\hat{\pmb{r}}$ ; the class probability is then $\mathrm{softmax}(\hat{\pmb{r}})$ . Training such a reward model reduces to training a classifier with the cross-entropy loss whose predicted conditional class probability is $\mathrm{softmax}(\hat{r} (\Psi (x_1,y_1)),\hat{r} (\Psi (x_2,y_2)))$ . Our theoretical development generally follows Bos & Schmidt-Hieber (2022). The architecture is a special case of an MLP that preserves antisymmetry: if we exchange the roles of $(x_{1},y_{1})$ and $(x_{2},y_{2})$ , we get the negated difference. By showing that this particular logit-Siamese architecture can approximate the true class probability, we can deduce that a BT reward model can approximate the true reward.

For notational convenience, we denote the embeddings of the $i$ th pair as $\Psi_1^{(i)}, \Psi_2^{(i)}$ for $i = 1, \ldots, n$ ; without loss of generality we assume $\Psi_j^{(i)} \in [0,1]^d$ . Denote the true preference probability as $\pmb{p}_0^{(i)}$ and the model prediction as $\hat{\pmb{p}}^{(i)} = (\sigma(\hat{r}^{\Delta}(\Psi_1^{(i)}, \Psi_2^{(i)})), 1 - \sigma(\hat{r}^{\Delta}(\Psi_1^{(i)}, \Psi_2^{(i)}))) = \text{softmax}(\hat{r}(\Psi_1^{(i)}), \hat{r}(\Psi_2^{(i)}))$ , and denote the preference vector as $\pmb{h}^{(i)}$ , which equals $(1,0)$ if the first response pair is preferred and $(0,1)$ otherwise. 
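The sigmoid-of-difference and softmax-of-two-rewards viewpoints used above are numerically identical, which a few lines of code confirm (a minimal sketch; the helper names are ours):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softmax2(a, b):
    # Softmax over two logits; subtract the max for numerical stability.
    m = max(a, b)
    ea, eb = math.exp(a - m), math.exp(b - m)
    return ea / (ea + eb), eb / (ea + eb)

def bt_pair_loss(r_chosen, r_rejected):
    """Per-pair BT cross-entropy loss, -log sigma(r_chosen - r_rejected)."""
    return -math.log(sigmoid(r_chosen - r_rejected))
```

For any two rewards $r_1, r_2$, `sigmoid(r1 - r2)` coincides with `softmax2(r1, r2)[0]`; for equal rewards, the per-pair loss is $\log 2$.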
The BT model thus reduces to a (pairwise) classification problem whose negative log-likelihood is given by

$$
\widetilde {\mathcal {L}} _ {\mathrm {CE}} (\boldsymbol {p}) = - \frac {1}{n} \sum_ {i = 1} ^ {n} \left(\boldsymbol {h} ^ {(i)}\right) ^ {\top} \log \left(\boldsymbol {p} ^ {(i)}\right), \quad \hat {\boldsymbol {p}} = \underset {\boldsymbol {p} \in \mathcal {F} _ {\theta}} {\arg \min } \widetilde {\mathcal {L}} _ {\mathrm {CE}} (\boldsymbol {p})
$$

It is unrealistic to assume we can find an NN that actually attains the global minimum; we denote the expected gap between the fitted NN and the global minimum as

$$
\Delta_ {n} (\pmb {p} _ {0}, \hat {\pmb {p}}) = \mathbb {E} \left[ \widetilde {\mathcal {L}} _ {\mathrm {CE}} (\hat {\pmb {p}}) - \min _ {\pmb {p} \in \mathcal {F} _ {\theta}} \widetilde {\mathcal {L}} _ {\mathrm {CE}} (\pmb {p}) \right]
$$

We consider the truncated KL risk, similar to Bos & Schmidt-Hieber (2022), to overcome the divergence problem of the KL risk.

Definition C.1 (Truncated KL risk (Bos & Schmidt-Hieber, 2022)). The $B$ -truncated KL risk for a probability estimator $\hat{\pmb{p}}$ is

$$
R _ {B} \left(\boldsymbol {p} _ {0}, \hat {\boldsymbol {p}}\right) = \mathbb {E} \left[ \boldsymbol {p} _ {0} ^ {\top} \min \left(B, \log \frac {\boldsymbol {p} _ {0}}{\hat {\boldsymbol {p}}}\right) \right] \tag {22}
$$

We consider MLPs with ReLU activations for $\mathcal{F}_{\theta}$ , parameterized by depth $L$ and width vector $\boldsymbol{m} = (m_0, \dots, m_L)$ , i.e.,

$$
\mathcal {F} (L, \boldsymbol {m}) = \left\{f: \mathbb {R} ^ {m _ {0}} \rightarrow \mathbb {R}, \boldsymbol {x} \rightarrow f (\boldsymbol {x}) = W _ {L} \psi_ {\boldsymbol {v} _ {L}} W _ {L - 1} \dots W _ {1} \psi_ {\boldsymbol {v} _ {1}} W _ {0} \boldsymbol {x} \right\}
$$

where $\psi_{\pmb{v}}(\pmb{x}) = \max (\pmb{x} - \pmb{v},0)$ is the ReLU activation with bias $\pmb{v}$ .

Assumption C.2 (MLP reward model). 
We further assume the network parameters have norm restrictions and sparsity, a common assumption in studying MLPs for classification problems (Yara & Terada, 2024; Bos & Schmidt-Hieber, 2022). That is, in this work we consider networks from the family

$$
\mathcal {F} (L, \boldsymbol {m}, s) := \left\{f: f \in \mathcal {F} (L, \boldsymbol {m}), \max _ {j = 0, \dots , L} \max \left(\| W _ {j} \| _ {\infty}, | \boldsymbol {v} _ {j} | _ {\infty}\right) \leq 1, \sum_ {j = 0} ^ {L} \left(\| W _ {j} \| _ {0} + | \boldsymbol {v} _ {j} | _ {0}\right) \leq s \right\} \tag {23}
$$

Another useful function class in the theory is the softmax-output version of the reward model, i.e., consider

$$
\mathcal {F} _ {\sigma} (L, \boldsymbol {m}, s) := \left\{\boldsymbol {p} \left(\Psi_ {1}, \Psi_ {2}\right): \boldsymbol {p} = \operatorname {softmax} \left(f \left(\Psi_ {1}\right), f \left(\Psi_ {2}\right)\right), f \in \mathcal {F} (L, \boldsymbol {m}, s) \right\}
$$

Next, we assume the probability of preference is not too close to 0 or 1, in the form of a small value bound.

Definition C.3 (Small value bound by Bos & Schmidt-Hieber (2022)). Let $\alpha \geq 0$ and let $\mathcal{H}$ be a function class. We say $\mathcal{H}$ is $\alpha$ -small value bounded ( $\alpha$ -SVB) if there exists a constant $C > 0$ s.t. for all probability estimates $\pmb{p} = (p_0, p_1) \in \mathcal{H}$ it holds that

$$
\mathbb {P} \left(p _ {k} \left(\Psi_ {1}, \Psi_ {2}\right) \leq t\right) \leq C t ^ {\alpha}, \quad \text {for all } t \in (0, 1 ] \text { and all } k = 0, 1 \tag {24}
$$

This indicates that our reward function and design of comparisons should have a tail behavior such that we do not tend to compare pairs with very different reward values. 
We denote this family as

$$
\mathcal {S} (\alpha , C) := \left\{g: \mathbb {P} _ {\Psi_ {1}, \Psi_ {2}} \left[ g \left(\Psi_ {1}\right) - g \left(\Psi_ {2}\right) \leq \sigma^ {- 1} (t) \right] \leq C t ^ {\alpha} \text { for all } t \in (0, 1 ] \right\} \tag {25}
$$

Definition C.4 (Hölder smooth function). For $\beta > 0$ and $D \subset \mathbb{R}^d$ , the ball of $\beta$ -Hölder functions with radius $Q$ is defined as

$$
C ^ {\beta} (D, Q) := \left\{f: D \rightarrow \mathbb {R}: \sum_ {\gamma : \| \gamma \| _ {1} < \beta} \| \partial^ {\gamma} f \| _ {\infty} + \sum_ {\gamma : \| \gamma \| _ {1} = \lfloor \beta \rfloor} \sup _ {\boldsymbol {x} \neq \boldsymbol {y} \in D} \frac {\left| \partial^ {\gamma} f (\boldsymbol {x}) - \partial^ {\gamma} f (\boldsymbol {y}) \right|}{\| \boldsymbol {x} - \boldsymbol {y} \| _ {\infty} ^ {\beta - \lfloor \beta \rfloor}} \leq Q \right\} \tag {26}
$$

Assumption C.5 (Class of true utility function). We assume that the true reward function is $\beta$ -Hölder and the probability induced by the softmax is $\alpha$ -SVB. I.e., we consider the function class

$$
\mathcal {G} _ {\alpha} (\beta , Q, C) = C ^ {\beta} (D, Q) \cap \mathcal {S} (\alpha , C) \tag {27}
$$

Note that this class is nonempty, since constant $g$ satisfies the requirement for any $\beta > 0, \alpha > 0$ with $Q > 1/2$ and $C \geq 2^{\alpha}$ .

Theorem C.6 (Theorem 5 of Schmidt-Hieber (2020), approximating Hölder smooth functions). For every function $f \in C^{\beta}(D, Q)$ and every $M > \max((\beta + 1)^{\beta}, (Q + 1)^{\beta / d} e^{\beta})$ there exists a neural network $H \in \mathcal{F}(L, \boldsymbol{m}, s)$ with $L = 3\left\lceil \log_2(M)(d / \beta + 1)\left(1 + \lceil \log_2(\max(d, \beta)) \rceil\right) \right\rceil$ , $\boldsymbol{m} = (d, 6(d + \lceil \beta \rceil) \lfloor M^{d / \beta} \rfloor, \ldots, 1)$ , and $s \leq 423(d + \beta + 1)^{3 + d} M^{d / \beta} \log_2(M)(d / \beta + 1)$ such that

$$
\| H - f \| _ {\infty} \leq \frac {C _ {Q , \beta , d}}{M}
$$

Remark C.7. 
Note that since the softmax with two outputs is Lipschitz with constant 2, the $L_{\infty}$ distance between the softmax outputs is bounded by $2C_{Q,\beta,d} / M$ by applying the above theorem.

Theorem C.8 (Oracle inequality, Theorem 3.5 of Bos & Schmidt-Hieber (2022)). Let $\mathcal{F}$ be a class of conditional class probabilities and $\hat{p}$ be any estimator taking values in $\mathcal{F}$ . If $B \geq 2$ and $\mathcal{N}_n = \mathcal{N}(\delta, \log(\mathcal{F}), d_\tau(\cdot, \cdot)) \geq 3$ for $\tau = \log(C_n e^{-B} / n)$ , then

$$
R _ {B} \left(p _ {0}, \hat {p}\right) \leq (1 + \epsilon) \left(\inf _ {p \in \mathcal {F}} R _ {B} \left(p _ {0}, p\right) + \Delta_ {n} \left(p _ {0}, \hat {p}\right) + 3 \delta\right) + \frac {(1 + \epsilon) ^ {2}}{\epsilon} \cdot \frac {68 B \log (\mathcal {N} _ {n}) + 272 B + 3 C _ {n} (\log (n / C _ {n}) + B)}{n} \tag {28}
$$

for all $\delta, \epsilon \in (0,1]$ and $0 < C_n \leq n / e$ .

Lemma C.9 (Adapted Lemma 3.8 of Bos & Schmidt-Hieber (2022)). Let $V = \prod_{\ell=0}^{L+1}(m_{\ell} + 1)$ ; then for every $\delta > 0$

$$
\mathcal {N} (\delta , \log (\mathcal {F} _ {\sigma} (L, \boldsymbol {m}, s)), \| \cdot \| _ {\infty}) \leq (8 \delta^ {- 1} (L + 1) V ^ {2}) ^ {s + 1} \tag {29}
$$

and

$$
\log \mathcal {N} (\delta , \log (\mathcal {F} _ {\sigma} (L, \boldsymbol {m}, s)), \| \cdot \| _ {\infty}) \leq (s + 1) \log \left(2 ^ {2 L + 7} \delta^ {- 1} (L + 1) d ^ {2} s ^ {L}\right) \tag {30}
$$

Substituting into the first bound and taking the logarithm yields the second line.

Note that, although the proof largely follows Bos & Schmidt-Hieber (2022), we cannot directly apply their Lemma 3.8 in our setting because, before the softmax layer, the two one-dimensional scores come from the same MLP with identical activations. This is only a subset of the family Bos & Schmidt-Hieber (2022) considered in their work.

Proof. 
For $g \in \log(\mathcal{F}_{\sigma}(L, \boldsymbol{m}, s))$ , there exists an $f_g \in \mathcal{F}(L, \boldsymbol{m}, s)$ such that $g(\boldsymbol{x}_1, \boldsymbol{x}_2) = \log(\operatorname{softmax}(f_g(\boldsymbol{x}_1), f_g(\boldsymbol{x}_2)))$ .

By Lemma 5 of Schmidt-Hieber (2020), we have

$$
\mathcal {N} (\delta / 4, \mathcal {F} (L, \boldsymbol {m}, s), \| \cdot \| _ {\infty}) \leq \left(8 \delta^ {- 1} (L + 1) V ^ {2}\right) ^ {s + 1} \tag {31}
$$

Let $\delta > 0$ and denote by $\{f_j\}_{j=1}^{\mathcal{N}}$ the centroids of a minimal $\delta/4$ -cover of $\mathcal{F}(L, \boldsymbol{m}, s)$ . For each $f_j$ there exists an $\hat{f}_j \in \mathcal{F}(L, \boldsymbol{m}, s)$ within $\delta/4$ of it, so the $\hat{f}_j$ 's form an internal $\delta/2$ -cover of $\mathcal{F}(L, \boldsymbol{m}, s)$ . For a $g \in$

$\log (\mathcal{F}_{\sigma}(L,\boldsymbol {m},s))$ , by the covering property there exists a $j$ s.t. $\| f_{g} - \hat{f}_{j}\|_{\infty}\leq \delta /2$ . By Proposition C.6 of Bos & Schmidt-Hieber (2022), abbreviating $f_{g}(\boldsymbol{x}_{i})$ as $f_{g}^{i}$ , we have

$$
\begin{array}{l} \left\| g - \log \left(\operatorname {softmax} \left(\hat {f} _ {j} ^ {1}, \hat {f} _ {j} ^ {2}\right)\right) \right\| _ {\infty} = \left\| \log \left(\operatorname {softmax} \left(f _ {g} ^ {1}, f _ {g} ^ {2}\right)\right) - \log \left(\operatorname {softmax} \left(\hat {f} _ {j} ^ {1}, \hat {f} _ {j} ^ {2}\right)\right) \right\| _ {\infty} \\ \leq 2 \left\| \left[ f _ {g} ^ {1}, f _ {g} ^ {2} \right] - \left[ \hat {f} _ {j} ^ {1}, \hat {f} _ {j} ^ {2} \right] \right\| _ {\infty} \leq 2 \max _ {i} \left\| f _ {g} ^ {i} - \hat {f} _ {j} ^ {i} \right\| \leq \delta \tag {32} \\ \end{array}
$$

Since $g$ was arbitrary, the functions $\log (\mathrm{softmax}(\hat{f}_j^1,\hat{f}_j^2))$ form an internal $\delta$ -cover of $\log (\mathcal{F}_{\sigma}(L,\boldsymbol {m},s))$ , hence

$$
\mathcal {N} (\delta , \log (\mathcal {F} _ {\sigma} (L, \boldsymbol {m}, s)), \| \cdot \| _ {\infty}) \leq \mathcal {N} (\delta / 4, 
\mathcal {F} (L, \boldsymbol {m}, s), \| \cdot \| _ {\infty}) \leq \left(8 \delta^ {- 1} (L + 1) V ^ {2}\right) ^ {s + 1} \tag {33}
$$

The second bound follows by taking $m_0 = d$ and $m_{L + 1} = 1$ (since we have a scalar reward) and removing inactive nodes: by Proposition A1 of Bos & Schmidt-Hieber (2022), $V \leq ds^{L}2^{L + 2}$ .

Remark C.10. Readers might suspect this is a direct result of Bos & Schmidt-Hieber (2022), obtained by first concatenating the two embeddings and training an MLP on this joint embedding to predict the softmax score. While this proposal satisfies the requirements of the theory, it does not provide a way to generate a reward for one embedding independently of the other embedding, and the result might not be antisymmetric. Theoretically, our rate depends on the embedding dimension $d$ rather than the concatenated dimension $2d$ that would result from directly using the results of Bos & Schmidt-Hieber (2022).

Theorem C.11 (Truncated KL risk bound). Suppose the probability of preference induced by the true utility function is in $\mathcal{G}_{\alpha}(\beta, Q, C)$ for $\alpha \in [0,1]$ and $\phi_{n} = 2^{\frac{(1 + \alpha)\beta + (3 + \alpha)d}{(1 + \alpha)\beta + d}} n^{-\frac{(1 + \alpha)\beta}{(1 + \alpha)\beta + d}}$ . Let $\hat{\pmb{p}}$ be an estimator from the family $\mathcal{F}_{\sigma}(L, \pmb{m}, s)$ satisfying 1) $A(d,\beta) \log_2(n) \leq L \lesssim n\phi_n$ for some suitable constant $A(d,\beta)$ , 2) $\min_i m_i \gtrsim n\phi_n$ , and 3) $s \asymp n\phi_n \log(n)$ . For sufficiently large $n$ , there exist constants $C', C''$ such that whenever $\Delta_n(\hat{p}, p_0) \leq C'' B\phi_n L \log^2(n)$ , then

$$
R _ {B} \left(\boldsymbol {p} _ {0}, \hat {\boldsymbol {p}}\right) \leq C ^ {\prime} B \phi_ {n} L \log^ {2} (n) \tag {34}
$$

where $a \lesssim b$ means there exists some constant $C$ s.t. $a \leq Cb$ , and $a \asymp b$ means $a \lesssim b$ and $b \lesssim a$ .

Proof. We apply the oracle bound of Theorem C.8. 
Take $\delta = n^{-1}$ and $\epsilon = C_n = 1$ , using the fact that $d_{\tau}$ is upper bounded by the sup-norm. Then applying Lemma C.9, we have

$$
\begin{array}{l} R _ {B} (\boldsymbol {p} _ {0}, \hat {\boldsymbol {p}}) \leq 2 \left(\inf _ {\boldsymbol {p}} R _ {B} (\boldsymbol {p} _ {0}, \boldsymbol {p}) + \Delta_ {n} (\boldsymbol {p} _ {0}, \hat {\boldsymbol {p}}) + \frac {3}{n}\right) \\ + 4 \cdot \frac {68 B (s + 1) \log \left(2 ^ {2 L + 7} \delta^ {- 1} (L + 1) d ^ {2} s ^ {L}\right) + 272 B + 3 (\log (n) + B)}{n} \tag {35} \\ \end{array}
$$

We pick $M = \lfloor c2^{\frac{(2 + \alpha)\beta}{(1 + \alpha)\beta + d}}n^{\frac{\beta}{(1 + \alpha)\beta + d}}\rfloor$ for a small constant $c$ . With large enough $n$ , we apply Theorem C.6: the softmax-transformed version of the resulting network, denoted $\tilde{p}$ , is in $\mathcal{F}_{\sigma}(L,\boldsymbol {m},s)$ with $L$ as given in Theorem C.6 and maximum width bounded by $\lesssim M^{d / \beta} = c^{d / \beta}n\phi_n$ ; similarly, we have $s\lesssim M^{d / \beta}\log_2(M) = c^{d / \beta}n\phi_n\log_2(M)$ . Whenever $A(d,\beta)\log_2(n)\leq L\lesssim n\phi_n$ , the maximum width is $\gtrsim n\phi_{n}$ and $s\asymp n\phi_{n}\log (n)$ can be satisfied. Observe that the softmax output network satisfies Theorem 3.2 of Bos & Schmidt-Hieber (2022) with $C_1 = 4(4 + 2C_{Q,\beta ,d})$ . The factor 2 before $C_{Q,\beta ,d}$ , which differs from the exact statement of Theorem 3.2 of Bos & Schmidt-Hieber (2022), arises because our $C_{Q,\beta ,d}$ applies before the softmax layer and the softmax layer is Lipschitz with constant 2. Since $C_1 + 1\leq 4(5 + 2C_{Q,\beta ,d})$ , we have

$$
\inf _ {\boldsymbol {p} \in \mathcal {F} _ {\sigma}} R (\boldsymbol {p} _ {0}, \boldsymbol {p}) \leq 8 C 2 ^ {3 + \alpha} \frac {(5 + 2 C _ {Q , \beta , d}) ^ {3}}{M ^ {1 + \alpha}} \left(1 + \frac {I _ {\alpha < 1}}{1 - \alpha} + \log (M)\right) \lesssim \phi_ {n} \log (n)
$$

Together with the oracle bound Equation (35) and $s \asymp n\phi_n \log(n)$ , the statement follows. 

Lemma C.12 (Lemma 3.4 of Bos & Schmidt-Hieber (2022)). For any $B \geq 2$ and probability measures $P, Q$ on the same measure space, we have

$$
H ^ {2} (P, Q) \leq \frac {1}{2} K L _ {B} (P, Q) \tag {36}
$$

where for discrete probabilities

$$
H ^ {2} (P, Q) = \sum_ {j} (\sqrt {P _ {j}} - \sqrt {Q _ {j}}) ^ {2}
$$

Remark C.13 (Connecting probability to reward). Since $(\sqrt{a} - \sqrt{b})^2 = (a - b)^2 / (\sqrt{a} + \sqrt{b})^2$ , Lemma C.12 indicates that on large subsets of the embedding space

$$
\left| p _ {0} \left(\Psi_ {1}, \Psi_ {2}\right) - \hat {p} \left(\Psi_ {1}, \Psi_ {2}\right) \right| \lesssim \left| \sqrt {p _ {0}} + \sqrt {\hat {p}} \right| \sqrt {\phi_ {n} L} \log (n)
$$

$$
\left| r \left(\Psi_ {1}\right) - r \left(\Psi_ {2}\right) - \left(\hat {r} \left(\Psi_ {1}\right) - \hat {r} \left(\Psi_ {2}\right)\right) \right| \lesssim \frac {\left| \sqrt {p _ {0}} + \sqrt {\tilde {p}} \right|}{\tilde {p} (1 - \tilde {p})} \sqrt {\phi_ {n} L} \log (n)
$$

where $\tilde{p}$ is a probability between $p_0$ and $\hat{p}$ ; the second line is due to the mean value theorem. This indicates that comparisons should be made between pairs that are relatively close in reward, to avoid the diverging behavior of the logit function.

# D ANALYZING ORDER CONSISTENCY

Proposition 3.3 (Lower bound on population level order consistency). 
Suppose a learned model $\hat{H}$ achieves the objective in equation 9 up to $\delta \epsilon$ error for some small $0 < \delta < 1$ and $\epsilon < 3/20$ , i.e.,

$$
\mathbb {E} _ {x _ {1}, x _ {2}, y _ {1}, y _ {2}, h} \mathbb {1} [ h = \hat {H} ] \geq 1 - \delta \epsilon \tag {10}
$$

Then, with probability at least $1 - \delta$ over $\Delta r$ , for any given $\Delta r$ the order consistency of $\hat{r}$ with respect to the oracle utility is bounded below by:

$$
\mathbb {E} _ {x _ {1}, x _ {2}, y _ {1}, y _ {2}} \left[ \mathbb {1} \left(\hat {H} \cdot [ r (x _ {1}, y _ {1}) - r (x _ {2}, y _ {2}) ] \geq 0\right) | \Delta r \right] \geq (1 - \epsilon) \cdot \xi^ {2} (\Delta r) + \epsilon \cdot (1 - \xi (\Delta r)) ^ {2} \tag {11}
$$

Further, if we assume that $\xi(\Delta r) \geq \sqrt{\epsilon^2 + 1 - 3\epsilon} + \epsilon$ with probability at least $1 - \kappa$ , we have

$$
\mathbb {E} _ {x _ {1}, x _ {2}, y _ {1}, y _ {2} \sim \ell (x)} \left[ \mathbb {1} \left(\hat {H} \cdot [ r (x _ {1}, y _ {1}) - r (x _ {2}, y _ {2}) ] > 0\right) \right] \geq 1 - 4 \epsilon - \kappa - \delta \tag {12}
$$

Proof. The idea of the proof is to first use Markov's inequality to bound the probability that, for a given distance $\Delta r$ , the preference model poorly approximates the annotator; then, on the event that the preference model approximates the annotator well, we bound the total error contributed jointly by the preference model and the annotator. 

By assumption, the (marginal) error probability, averaged over all distances $\Delta r$ , satisfies

$$
\begin{array}{l} \mathbb {P} _ {x, y _ {1}, y _ {2}, h} \left(\hat {H} \neq h\right) \\ = \mathbb {E} _ {\Delta r} \left[ \mathbb {P} \left(\hat {H} \neq h \mid \Delta r\right) \right] \tag {37} \\ \leq \delta \epsilon \\ \end{array}
$$

Denote the random variable

$$
\Pi_ {r} := \mathbb {P} \left(\hat {H} \neq h \mid \Delta r\right) \tag {38}
$$

By Markov's inequality,

$$
\mathbb {P} _ {r} \left(\Pi_ {r} \geq \epsilon\right) \leq \frac {\delta \epsilon}{\epsilon} = \delta \tag {39}
$$

On the event $\{\Delta r:\mathbb{P}\left(\hat{H}\neq h \mid \Delta r\right) < \epsilon \}$ , which holds with probability at least $1 - \delta$ , we bound the error rate as a function of $\Delta r$ . Conditioning on $\Delta r$ , define the following probabilities:

- $p_{\text{annotator}} = \xi(\Delta r)$ is the probability that the annotator $h$ is correct (i.e., agrees with the oracle utility) given the oracle distance.
- $1 - p_{\text{annotator}} = 1 - \xi(\Delta r)$ is the probability that the annotator $h$ is incorrect given the oracle distance.

Given the bounded disagreement between the learned model $\hat{H}$ and the annotator $h$ :

- Correct Case: When the annotator is correct, the learned model agrees with the annotator with probability at least $1 - \epsilon$ . Thus:

$$
p _ {\text {correct}} \geq (1 - \epsilon) \cdot \xi (\Delta r). \tag {40}
$$

- Incorrect Case: When the annotator is incorrect, the learned model agrees with the annotator with probability at most $\epsilon$ . Thus:

$$
p _ {\text {incorrect}} \leq \epsilon \cdot (1 - \xi (\Delta r)). 
\tag {41}
$$

The order consistency of the learned model $\hat{H}$ with the oracle utility can be expressed as:

$$
\mathbb {E} _ {x, y _ {1}, y _ {2} \sim \ell (x)} \left[ \mathbb {1} \left(\hat {H} \left(r \left(y _ {1}, x\right) - r \left(y _ {2}, x\right)\right) \geq 0\right) \mid \Delta r \right] = p _ {\text {correct}} \cdot p _ {\text {annotator}} + p _ {\text {incorrect}} \cdot \left(1 - p _ {\text {annotator}}\right). \tag {42}
$$

Substituting the bounds and simplifying, we have

$$
\mathbb {E} _ {x, y _ {1}, y _ {2} \sim \ell (x)} \left[ \mathbb {1} \left(\hat {H} \left(r \left(y _ {1}, x\right) - r \left(y _ {2}, x\right)\right) \geq 0\right) \mid \Delta r \right] \geq (1 - \epsilon) \cdot \xi^ {2} (\Delta r) + \epsilon \cdot (1 - \xi (\Delta r)) ^ {2}. \tag {43}
$$

The second part of the proof follows by observing that $\xi (\Delta r)\geq \sqrt{\epsilon^2 + 1 - 3\epsilon} +\epsilon$ implies $(1 - \epsilon)\cdot \xi^{2}(\Delta r) + \epsilon \cdot (1 - \xi (\Delta r))^{2}\geq 1 - 4\epsilon$ when $\epsilon < 3 / 20$ ; consider this conditional bound

$$
\mathbb {E} _ {x, y _ {1}, y _ {2} \sim \ell (x)} \left[ \mathbb {1} \left(\hat {H} \left(r \left(y _ {1}, x\right) - r \left(y _ {2}, x\right)\right) \geq 0\right) \mid \Delta r \right] \geq 1 - 4 \epsilon \tag {44}
$$

The stated bound fails only if either 1) $\Pi_r > \epsilon$ , which happens with probability at most $\delta$ , or 2) $\xi(\Delta r) < \sqrt{\epsilon^2 + 1 - 3\epsilon} + \epsilon$ , which happens with probability at most $\kappa$ ; thus the stated bound holds with probability at least $1 - \kappa - \delta$ by a union bound over the failure modes.

Then, by the definition of conditional probability, the bound in the theorem holds.

![](images/85b7e26f8e7e203d0a159a31f0c81be0f026901359e841250a2f504d542d853c.jpg)

Proposition D.1 (Classification reward). 
Suppose the data actually come from the BT model in Equation (1). Then the score $s_i \coloneqq \operatorname{logit} \mathbb{P}(i \text{ wins})$ is connected to the BT reward: for a constant $C$ that does not depend on $i$ ,

$$
s _ {i} \geq r _ {i} - C
$$

Proof. We condition on which $j$ that $i$ competes with and apply Jensen's inequality:

$$
\mathbb {P} (i \text { wins}) = \mathbb {E} _ {j} [ \mathbb {P} (i \succ j | j) ] = \mathbb {E} _ {j} \left[ \frac {u _ {i}}{u _ {i} + u _ {j}} \right] \geq \frac {u _ {i}}{u _ {i} + \mathbb {E} [ u _ {j} ]}
$$

With some straightforward algebra, we have that

$$
\frac {\mathbb {P} (i \text { wins})}{1 - \mathbb {P} (i \text { wins})} \mathbb {E} [ u _ {j} ] \geq u _ {i}
$$

Taking the log on each side, substituting $u_{i} = \exp (r_{i})$ , and rearranging,

$$
s _ {i} := \operatorname {logit} \mathbb {P} (i \text { wins}) \geq r _ {i} - \log \mathbb {E} [ \exp (r _ {j}) ]
$$

where $\log \mathbb{E}[\exp (r_j)]$ is a constant that does not depend on $i$ .

![](images/bae7e960e091a57b33330fdddb4f753bcc52c79cb1b6109779875223db1b961a.jpg)

# E EXPERIMENT DETAILS

To enhance the reproducibility of our work, all code, datasets (demonstrations), fine-tuned LLMs, generated training and test responses, annotations of those responses, and their embeddings will be made publicly available.

Computationally Efficient Experiment Design and Reproducibility Our experiments are conducted on a cluster with 128 Intel(R) Xeon(R) Platinum 8336C CPUs @2.30GHz and NVIDIA V100 32GB or NVIDIA A100 80GB GPU nodes. We use vllm (Kwon et al., 2023) to accelerate the LLM generation process.

To reproduce our 12,000 experiment results, only CPUs are needed (since we will release the embeddings of the generated responses); reproducing all experiments requires 6,000 CPU-core hours — on a machine with 128 CPU cores, it will take 50 hours to reproduce all of our 12,000 experiments (This includes our 5 repeated runs using different random seeds. 
Running with only 1 seed will take less than 10 hours). Each setup takes less than $30\mathrm{min}$ of CPU-core time — less than 1 minute to finish on a 128-core server.

Supervised Fine Tuning Stage of Base Models Following Stiennon et al. (2020); Bai et al. (2022a); Sun & van der Schaar (2024), we use held-out demonstration datasets generated by GPT4 on the two tasks to conduct SFT on the three base models we use (Gemma2b, Gemma7b, LLaMA3-8b). This SFT procedure produces three SFT-ed checkpoints as additional base LLMs: Gemma2b-SFT, Gemma7b-SFT, and LLaMA3-8b-SFT. The SFT takes less than 10 hours (4 hours for the 2b models) using A100 GPUs and the TRL framework (von Werra et al., 2020).

Training and Test Data (Responses) Creation The Harmless dataset contains 41876 training prompts and 2273 test prompts. The Helpful dataset contains 42846 training prompts and 2292 test prompts. In our experiments, for each of the 6 base LLMs, we create 10 responses for each training prompt as candidates for reward model training. For each of the test prompts, we create 500 responses and annotate their golden utilities using the golden reward models for testing.

Creating Embeddings We use Gemma2b (Team et al., 2024) to generate embeddings for reward modeling in our experiments. Since we have 6 base LLMs to generate responses and 2 datasets, creating the embeddings for those generations requires GPUs. We use V100 GPUs with 32GB memory for embedding generation. Each setting (40000 prompts with 10 generations and 2000 prompts with 500 generations) takes around 16 GPU hours to finish.

Simulated Preference Annotation with Golden Reward Models To simulate the imperfect annotation process of human labelers, we consider label noise in our experiments following the literature (Ziegler et al., 2019; Dubois et al., 2024; Coste et al., 2023). 
However, instead of randomly injecting noise into the labeling process, we consider a more realistic annotation simulation using the cognitive bottleneck models studied in psychology (Stewart et al., 2005; Guest et al., 2016): comparisons made between responses that have similar scores have a higher probability of being mislabeled. Formally, we have (equation 8)

$$
\mathbb {P} \left(h (x, y _ {1}, y _ {2}) (r (x, y _ {1}) - r (x, y _ {2})) > 0 \Big | \Delta r\right) = \xi (\Delta r),
$$

We instantiate $\xi (\Delta r)$ as $\sigma (\beta \Delta r)$ , the sigmoid function determining the probability of a correct label. The $\beta$ parameter controls the annotation quality: when $\beta = 0$ , annotations are purely random, while when $\beta \rightarrow \infty$ , annotations are perfect. In our experiments, the default setting of $\beta$ is 1 unless explicitly specified otherwise (as in the experiments studying performance under different noise levels in annotations).

Hyper-Parameters of the LGB models and MLPs To maximally isolate the source of gains and separate the contribution of methods from their implementations, we use an identical hyper-parameter setup for all experiments (a single default hyper-parameter setup for LightGBM, and the same MLP configuration for BT and Classification). 

For LGB models, we use the default hyper-parameter setting of

```python
hyper_param_lgb = {
    'objective': 'binary',
    'metric': 'binary_logloss'
}
```

For MLPs, we use a minimalist three-layer feed-forward structure of

```python
hyper_param_mlp = {
    'activation': 'ReLU',
    'units': '(1024, 512, 1)',
    'loss': 'BCELoss',
    'optimizer': 'Adam',
    'lr': '0.001',
    'early_stop_patience': '3',
    'max_epoch': '30'
}
```

While further sweeping hyper-parameters for each setup would very likely further improve performance, such engineering efforts are beyond the scope of our research focus on investigating different methods, and are not informative for drawing conclusions to answer the above questions.

# F ADDITIONAL RESULTS AND DISCUSSION

Figures reporting results on changing annotation qualities: Figure 7.

Figures reporting results on changing annotation qualities under different annotation availability: Figure 8 and Figure 9.

Figures reporting results on changing annotation availability under different annotation qualities: Figure 10 and Figure 11.

Figures reporting cross-prompt annotation results on changing annotation availability and annotation quality: Figure 12 to Figure 15.

Post-hoc explanation of the superiority of the classification-based model over the BT model

(1) Analytically, the BT model can be used to estimate the winning rate of each player when matched with another, whereas the classification-based reward model estimates the marginal probability of winning. As we showed in Section 3, both options achieve order consistency; therefore, they both satisfy the minimal requirement for inference-time optimization, despite the classification-based score estimation carrying less information due to the marginalization.

However, given the same dataset, a more informative objective may not be as easy to learn as a less informative one. 

In reward modeling settings, we have to estimate/learn these logits from noisy (binary) annotations. The other side of the same coin of carrying less information is that the marginalization/averaging makes the marginal win probability, and hence the classification-based reward score, easier to learn from data: we trade away information that is unnecessary for inference-time optimization in exchange for a simpler target during the learning/reward modeling stage. This can be seen by decomposing the variance of the two targets via the law of total variance; the classification-based reward score has lower variance and is therefore an easier-to-learn objective.

$$
\begin{array}{l} \mathrm {Var} _ {i, j} (P (i \text { wins } j \mid i, j)) = \mathrm {Var} _ {i} (\mathbb {E} _ {j} (P (i \text { wins } j \mid i, j) \mid i)) + \mathbb {E} _ {i} (\mathrm {Var} _ {j} (P (i \text { wins } j \mid i, j) \mid i)) \\ = \mathrm {Var} _ {i} (P (i \text { wins} \mid i)) + \mathbb {E} _ {i} (\mathrm {Var} _ {j} (P (i \text { wins } j \mid i, j) \mid i)) \\ \end{array}
$$

The first term on the RHS is the variance of the classification-based target, and the second term is non-negative. The classification-based method therefore targets an objective with less noise in it.

(2) Empirically, this insight can be observed in our experiments. In Figure 7, we report the results in barplots on changing the annotation quality.
- In the top 3 panels, the annotation noise is high (around or above a 30 percent error rate); in those setups, the classification-based reward models in general outperform the BT reward models.
- In the bottom 3 panels, the annotation noise is low (less than a 15 percent error rate); in those setups, the BT models achieve better or on-par performance compared with the classification-based reward models.
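The variance argument in (1) can also be checked numerically. Below is a minimal sketch (our illustration, assuming BT-style win probabilities $\sigma(r_i - r_j)$ over Gaussian latent scores, not the paper's experiment code): the marginal, classification-style target exhibits lower variance than the pairwise BT-style target.

```python
import math
import random

random.seed(0)
n = 400
scores = [random.gauss(0.0, 1.0) for _ in range(n)]  # latent "true" rewards

def p_win(ri, rj):
    # BT-style probability that player i beats player j
    return 1.0 / (1.0 + math.exp(-(ri - rj)))

# Pairwise target: P(i wins j | i, j) over all ordered pairs with i != j
pairwise = [p_win(scores[i], scores[j])
            for i in range(n) for j in range(n) if i != j]
# Marginal target: P(i wins | i), averaging out the opponent j
marginal = [sum(p_win(scores[i], scores[j]) for j in range(n) if j != i) / (n - 1)
            for i in range(n)]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

var_pairwise, var_marginal = var(pairwise), var(marginal)
# Law of total variance: the marginal target has strictly lower variance
```

The gap between `var_pairwise` and `var_marginal` is exactly the non-negative second term in the decomposition above.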
+ +![](images/67016c45242ca5e733e69ac45079c38425bbd53db4d2a996aa60ea6ab18bde00.jpg) + +![](images/05416dd62f588ebee667746bd775101ef7eff736a597e4fc3737ff78b41c8ae3.jpg) + +![](images/bc3d94e39bf7a9fc5b70f95eb1a485cc46e820e07f70df95298dce8d2cfb41bf.jpg) + +![](images/a81679d6935fb7a6bd8b1c60bedf53d6b8c5fa594a90cfef36d0035d1277a9e8.jpg) + +![](images/8ef4616f912fc3ea0ac3b5b637b7b0e7344ab553e0d8bf86e857bc769b75f335.jpg) + +![](images/64f5cd887632f3e0e74d15a9757d53da03669c0f71ad3600ff1737074f6ff32c.jpg) + +![](images/5d3766e4d5d280a19e519a769cec09475e12130858d57a7d97ccc1f74a0241f6.jpg) + +![](images/22ee9dd5c47ef63b47cd60b5d7e238ef67d2790e755a8c6ba50f7808b5205add.jpg) + +![](images/008807b18b9ffa2be143b22c780ccf87539dbea9c3e5ea42274a88368fe62e6e.jpg) + +![](images/a95d77df27aee81ef9ac28fd0e993e864a4a1267547ab0505b37012e083f87a2.jpg) + +![](images/ed94cd5a7061807f50c246a6f66f14d1602f5b0787ad410f7031b7dbead87ae9.jpg) +Figure 7: Experiment results in barplots on changing the annotation quality. Error bars are given by 5 runs by changing seeds. + +![](images/2948918c2a0f49256a538385890ca26df68f194624db7fc20fc9df4d54596dfe.jpg) + +![](images/8899382282571aa5db34de196ab136ae883547710a33180c1aad69949ccb327a.jpg) + +![](images/2c6392ab0cd0fc7a2a7b4c67e8b1e8b47bea82d5f5fb6188eb537391dd51755b.jpg) +(a) 5000 annotations + +![](images/bbf72cb96b462cf5bb30c21991f5cbdbd75953dec71ae24467abac40be843ed0.jpg) +(b) 10000 annotations + +![](images/561e26588326e761897fa67e1961589148ea49ce199ea4fac763f9f4c934a306.jpg) +(c) 20000 annotations +(d) 40000 annotations +Figure 8: Harmless Dataset: additional results on changing annotation quality under different annotation availability. Error bars are given by 5 runs by changing seeds. 
+ +![](images/046fb3930c1488a4819a7f3fa470f85ed733c57022677f895e1492c28d8d3a89.jpg) + +![](images/0dd9fc81647439cd16e170fef4fb105c5e10e9d4369f4d39b22b94257c91413f.jpg) + +![](images/23796dfe616b4789ad61f41e68057e9e09844d0be82b0363f2a06c02f5d8af27.jpg) +(a) 5000 annotations + +![](images/3bccee522ce457d68159218ef84a51c5cc8e3c3096ee774c159b4cbf992a0a40.jpg) + +![](images/d6552de103dcd99463a170577da75193fe05db45897469d72215f124e7469008.jpg) + +![](images/41e6b6d978c573599ec14e79c63fc78647ff3e5266710e1547a54af5f63aae26.jpg) + +![](images/0b92a8bb713873dac255beb80dc72a20abd3b7ecd6b916031181851317bb4201.jpg) + +![](images/ccf233f71b0911bc04bf5e2ec73999612c1e7ff9e73c2a1a6add318b0fd0f198.jpg) + +![](images/4397ebb6e3f31d0c8697b9397336cd7f8ae07480cf1977bf8a12183083af9d5a.jpg) +(b) 10000 annotations + +![](images/fb21232cc1289f7ea05905693f4a5cdd008e4e4303fd9a732111732de486718c.jpg) + +![](images/d3d6f6fbdb589f0dce9d2ec93cf056a4872fa39b5ffa8f32f682d4153fea16a0.jpg) + +![](images/8558a4733b50740f6eabad9776aadac1bbb35de57f7e3ea59472176f0ed42e96.jpg) + +![](images/82a35467a5966caa7c3a3ea1193c195ee71c4005442ee671bfc0d95516e855ed.jpg) + +![](images/80b922a642c43c9440f93a27245904a2189cc388e3674b650d4c1d2b006c142d.jpg) + +![](images/153cc9cd771b4ce0c1692bd257b300a9dcbe0f1bd466178dc3ade3dee5acdf62.jpg) +(c) 20000 annotations + +![](images/b73f126e86739b9606f54c2f239052513a51d1e7f1c92e40f3e525e835fb0834.jpg) + +![](images/e57013ff9960af213bfadbfd9c99ee3cd7c322937706b2533fda152e9bcd5d38.jpg) + +![](images/9b014887e7d4e9a0f11e5aeacd11a4f0381f90d932ebd51bc330f1ac128caeea.jpg) + +![](images/2a9f8e095210da9878ceaeb5ac20df8c16c8274bb4c5a15cde4f344550d85299.jpg) +Figure 9: Helpful Dataset: additional results on changing annotation quality under different annotation availability. Error bars are given by 5 runs by changing seeds. 
+ +![](images/0b1f1fb21672833913bb82ee1b32f853a49a45c47ff3e2969c4fc743692feb7c.jpg) + +![](images/fd65da4b56f3dbf8ab9b5a4e097a250ce49c25517157389f4e2d841fe6ccee2a.jpg) +(d) 40000 annotations + +![](images/74b9b2b8c43d8b817433a390b232437371feb8c8659d0c6adfbeb82c2eb71073.jpg) + +![](images/a90fd002849e1180553bc9825d5bbea4ac6f61460ff76903371485c14e0b8741.jpg) + +![](images/a63b6991010723c4b0fc912f762847eb014b03317d5d58b7ee2a19dc58b6ba3d.jpg) + +![](images/cb7216fde97d39eb86fbacaba461591a6e8cf80b3aceca873e54faba1370315a.jpg) + +![](images/f69d2805cb7f6531813db94c730383186134158e578eebf60716f17bd7505da5.jpg) + +![](images/dbb22abcf331620b05eaec05f9ba997826ea384a8a2571172980ff57ec488c35.jpg) +(a) $\beta = 0.5$ + +![](images/aca20f21b62c79c742de54527cb64bcfef3c354c1b03e9c090e9cc39f369450b.jpg) + +![](images/effbc483b1537bc60214d122b4452cb2a83a281676abdda569c1f4b366f09d43.jpg) + +![](images/515f4ea0550ea7204b6df227c83e1c49aef0c85564996bdcb66b043ec04454c9.jpg) + +![](images/f8c7ae4bcfdccb622dc949bae61d1fda43666789600e524ba4a2d7e04317940f.jpg) + +![](images/6b090eddb1b1ac7988667c5a0bb22e72acd942ee113b78ffa7e225315288493f.jpg) + +![](images/8443fd29b38a9d978d277a81228c3bc25dff97157ba9372293fd793f7739703b.jpg) +(b) $\beta = 0.7$ + +![](images/3d288309424eef521a97ea5f9c6cbacc2e73796efcbaa00cc3eb920d822e9606.jpg) + +![](images/55e9fa3186235d22e02ed30e5acbfb864567026c990f1cdde22893a6b3cbccab.jpg) + +![](images/23daa9eacd6b879fe89471c6316cf928e0eb3d460e5d482f670117fce750b061.jpg) + +![](images/08c2c6cc03c2c71e1b2dae7e09520c7d25ee0b121b9dcec4283a161102eb1baa.jpg) + +![](images/b9ce57bdb91d090f8ea392893837123f6acfcd5ed23cd81d91ed1717907fc44e.jpg) + +![](images/e9a4e6cf122c5c00314259ba3f1d5ed2d9aeea1cb04d717d6090ba714d897129.jpg) +(c) $\beta = 1.0$ + +![](images/c71aafb26919447f8caf16336e80c3e89c1909f320267c3d73f3ff2b4f1883cd.jpg) + +![](images/9a952dfe646faf8b1929bb0901723ca69747d608cac23a7dccfc1af3c432f802.jpg) + 
+![](images/586c81a1c0c3f0ecfb240da565d894a1bb5a30ede301a002e1bcfabf088bb3c2.jpg) + +![](images/770e1f0d9e291b5cb2f2dbf55995cef73a6bc683eaee478056d2fc99f81fbf29.jpg) + +![](images/85f00e0637fe8f1552afb2d00df378481c9ff7ab6e2ffd58c4919a9d6853f559.jpg) + +![](images/e6ad1688094ffa4c08b0c9167732246be11419192d9a8e64360ab254cbf085df.jpg) +(d) $\beta = 3.0$ + +![](images/164abfec6dec139514470849b75fb12f5fbd4807d4f004437c8762df65620cf4.jpg) + +![](images/641a3773417d311930178c6a677b287569e097b70cf707ecbde73cf00e3bbef4.jpg) + +![](images/1984442a18eddb8343db12d925ca9e95f950a7dd8f6c5a6af12ddd93f3be2a9e.jpg) + +![](images/20b07ef984afe9dc3ad1972b9dd76fdcbf9767b542096042d0b51d086d8d91a7.jpg) + +![](images/d70aa960079c80c8aee53f6bd61fbc73e46f2d9a18ad7e52fb666593dec7734e.jpg) + +![](images/ac3a0148bc8c0e51721bf1b28d846239026f7e55512506cc663dfa53d00ad985.jpg) + +![](images/95749a37bcc71bab9c781fcbca736746a9af1d96ac45228502be1e4bad8a9f8a.jpg) + +![](images/e532df7f7008e1b83b90ab64b2e97e03f8afaa277fd419aa28fd7ed52def93b8.jpg) + +![](images/8b4c20c0de294188bbc271e8b6c94ca75f6d89cb41751a36fc9fffe998628fd6.jpg) + +![](images/cf6591e2ff51aa0230d04870870b2bc36dc9b58751f4fff975c62e0f267cb7a5.jpg) +Figure 10: Harmless Dataset: additional results on changing annotation availability under different annotation quality. Error bars are given by 5 runs by changing seeds. 
+ +![](images/7829daacd56a92cf8cad432d8bc8f27ac084fc5bc0f8e81fcce771ca76f4d4a5.jpg) + +![](images/3006b1a41a5dfee6b388e4238b400ff477206bccc1776f5a79f68e6bfd85807f.jpg) +(e) $\beta = 5.0$ + +![](images/0bbf7ca4f18f0cdbf3e45c60e93d953eae601b370fbc5c1f29ffcd8835238eb9.jpg) +(f) $\beta = 10.0$ + +![](images/f05768adbde0a060e9963254a1e929af40ed134fe43aa0ff997edf1bc0356621.jpg) + +![](images/7b5599a04a7a23ffc7ad9b1d06807f3e80435d71e233c24c2e7b67b237c2fa97.jpg) + +![](images/7479e8c95a04d7fb44fad129fbc4c2c376e011a64ab396389f561b5ca11ba620.jpg) + +![](images/d326a52038d02520de44d32d105105b0ebe8f9bb3a4f31cc619b03730cb4682d.jpg) + +![](images/8baff5a6928b2caada2ae82ae390eb7db16ab6c3222f44256bd7ec93e07ebbc0.jpg) +(a) $\beta = 0.5$ + +![](images/2a306fccb351c12b6eaa09cb7ebf5ee5518c350241dc6a1a65defed6d5ac2a77.jpg) + +![](images/edf0fb1f97bbc2584d459b8806c24dd0c46f3b9aa012baceb501672ff27631c5.jpg) + +![](images/c06e8db14b0ec6716aefa867489daeb01ae88fd45c5d15b365df5b905b581732.jpg) + +![](images/8cd6052dcd72cf1562e75c10e3554a332ba72e96e2716b9a57c703a81deef828.jpg) + +![](images/030abbeda6e5ec837f0500823206108cdc2e5265cdc869a4f8a954d5960269ec.jpg) + +![](images/2733fb6d926f4c5ddf329a8b59d597607bb08147f24ddd8d28a4c08be650fdab.jpg) +(b) $\beta = 0.7$ + +![](images/9be1774b3cf1d316fb6ac15343e7a5e1768f4a5386e7acf3f1456825f9bf99fd.jpg) + +![](images/f7b224d5ad585e824bf327bd9bbb235d54aff5442664b259f081922facf053db.jpg) + +![](images/b14db07fe2444e8e7996c8a3c167113ac3598114a6ebd060577e2fe9244418ce.jpg) + +![](images/10435e11bba6ca61991fc285fddc6189e84fac3daa18d07f0879e6667fbcd8bf.jpg) + +![](images/2665ee50ac914f66ad6beca94bef197ed723170c756eef18bc73ec08e3c26271.jpg) + +![](images/43de515d7d966ba601ce87495289c04dba95b85b207c3fbb1c432e0fc2a5867f.jpg) +(c) $\beta = 1.0$ + +![](images/8bae8e3b5417bdea3ae89571e54516d753c5aea5d93ff3d913a1c087c3e2ff05.jpg) + +![](images/65c95a40dcd0eb9fe2ae4e2147e4f37db7e3ecf0349569de06466980381187d9.jpg) + 
+![](images/376d803400f2097c58f5ed1fd5fa72486e5fa6f4f0830fe789bd2e2843677bb3.jpg) + +![](images/9540be6abfb82ffa0753ea28e394bf95d98d024fb3104a9daa5622e9e66e8770.jpg) + +![](images/ddfe568829e1e8a7c2a9a3eb50ea396da737120043dd8c07912332022efedc06.jpg) + +![](images/74247862bec37f1b66fa701cd9239532f00c201f4e55176b2cc2fad5b351b348.jpg) +(d) $\beta = 3.0$ + +![](images/6404f6db82da7337c6041b3946517272f84f0ab27f9e8fcdcab51709d6003c7c.jpg) + +![](images/96607752e892ccee1eb109dffa81bd70205e6dac117b9d4be9853302bf1ceaf1.jpg) + +![](images/e20bb9cbfe414ec5581d322ee254f0d5713912c7ecc2cf9eaa330af829d82cd6.jpg) + +![](images/6f48de99870442a26c97406ea5316406a6cc2ff81d157ab5c780f9b5b5cd3516.jpg) + +![](images/852eba043235c205f797d206c09d477adc85f8d1360fcf2a5a25d646a23315e3.jpg) + +![](images/18d7e5eb127a855a6bfa95252602d3a443857ec9d5dd7345fdaf70e0d218fbd3.jpg) + +![](images/b4da1ecb822363aed11d1c126a50aea7cb33bbd8ebdb4d0f2e73a3cc2a885705.jpg) + +![](images/ee30b5ddea550ae9989da4b25a27616d932710c4911b2baaa20197f98128d9f4.jpg) + +![](images/ddd8b5c827cdad20b1cc06bebb75883f6103ddb7eb8f4bbf2e7beecf2b0c39e5.jpg) + +![](images/88942b78a01601929b14377bf556fdfe2fc55a4f7c1613bac478cac573aabef4.jpg) +Figure 11: Helpful Dataset: additional results on changing annotation availability under different annotation quality. Error bars are given by 5 runs by changing seeds. 
+ +![](images/b2d0be0466214a7b47b572120e44c3c4df79c9641b5564efc44607fef88c3de2.jpg) + +![](images/dd45a2f56da35e94dc2aaa4d8bcafbf4bc81e5af469e4670305223980b0a4216.jpg) +(e) $\beta = 5.0$ + +![](images/aa891355e1c81984fc3a9d45cb3388a5deeb2ded74dd5c331d60823b3d217cff.jpg) +(f) $\beta = 10.0$ + +![](images/8a1d747cf337ad7127a7a1742e99922225db60b27f5e39588b20637e03074dd7.jpg) + +![](images/39027e7a93126cad9ae72c15ebcf33225afd5757b9a84ec3db0e4f3c2a0d0305.jpg) + +![](images/88f437d35a9b29a4fb7721d3dd8f191d9c4a09c5c1267ebda443a3c98a9bfe66.jpg) + +![](images/288956a84d4ad5ad187c6b8f74ee74f7358e4eac419af7419d5900a920827725.jpg) + +![](images/5c01faa94072dc4bf55908d3b7fa2016b8622c718ad81cd53a541e6ef026e023.jpg) + +![](images/89fbe0b30cac12d1da5441450ca5b0be153fd2f0c62a9ee7518c434bccb5d1d5.jpg) + +![](images/4f4434cc7cf0c6e74e52f71fb84fc23b054033b1110ca7a3167bf9b434d40f25.jpg) +Figure 12: Results on cross-prompt comparisons, with 5000 annotations, $\beta = 1$ . Error bars are given by 5 runs by changing seeds. + +![](images/ecbcb2b903e457e2f5bff742f025013ce3db390719504bae34cbcf37acc8b18a.jpg) + +![](images/16bd29a74be2d499574b0139defabafa045f57f23e09a55658e04ad13787cf1e.jpg) + +![](images/39fd1bbe012bda34f386af01d2fd4c3cde882a3114619287492e468cb796db85.jpg) + +![](images/164b81f3e30e2ab0d903a7c718db481321884ef31bbe90dd0c135e0bb4262def.jpg) + +![](images/10c55dc708f596e67751222dda8074ac419a713c12b47b8912b03fb18a026b85.jpg) + +![](images/0c11442fa6a335235ad04ac17983164215562aca1cc61cefd50140d2387ff6ea.jpg) +Figure 13: Results on cross-prompt comparisons, with 5000 annotations, $\beta = 10$ . Error bars are given by 5 runs by changing seeds. 
+ +![](images/ee8c52d8ef4e2a85b3b400b8ba20d22f144d5020c0341c2368012524d16da587.jpg) + +![](images/dbe3234ad78fc6c33878976be0845aaf07b5a8eaee0c523510e63f00d68a76c3.jpg) + +![](images/6bb7f48316820d81d2a19bc729bf717132cab4013df440cc22672dbac41aadae.jpg) + +![](images/b594ce07cc4a2a4480f1f2f960ed1d5178b3b5ea96b8df5d9cdbe9338c0b08b0.jpg) + +![](images/6954c5ca31ffe676f54f3e383035208552318637ae6e168292c78cd664ff3609.jpg) + +![](images/caf9b25a76255a7a393a355fe23f516ba617d1c97a1511768ec16fe318c2eed2.jpg) +Figure 14: Results on cross-prompt comparisons, with 40000 annotations, $\beta = 1$ . Error bars are given by 5 runs by changing seeds. + +![](images/72076cd670f5425994d35b13c7bc54a97e720a976672ba4cbcd003405856ce33.jpg) + +![](images/5f2581ba1b872254773c935a7efe78ff1acbd7bfea8f2ca67d8d453e4f2e0fa3.jpg) + +![](images/0553ceadf8b7a1225216cb81954f40d79ebb8beabcf88a8e7faa05be050898af.jpg) + +![](images/1aa5929358e4428f52520e68ad11b28e9aa8dae3feed7e27be3d22732b4a37f1.jpg) + +![](images/e589fc6cc126a4b90ee89a47e8bbede1c0ba0f33181d9978a3986b6b02cb3456.jpg) + +![](images/33bd841a4dec2243725adf608bca5b4577271bef4683fec77ee94b2b47bdcc25.jpg) +Figure 15: Results on cross-prompt comparisons, with 40000 annotations, $\beta = 10$ . Error bars are given by 5 runs by changing seeds. + +![](images/8beea49620fb8f9b014165cd2a4c5f224cbab3a2a889a6632d2b633609fa6340.jpg) + +# G THEORETICAL JUSTIFICATIONS FOR CROSS-PROMPT COMPARISONS + +In both BT and classification-based reward modeling, there is no theoretical requirement to limit comparisons to the same prompts. For classification models, this is straightforward, as they do not rely on paired data at all. Similarly, in traditional BT applications, random pairwise comparisons among players are common. This further motivates our investigation into how randomized comparisons across different prompts affect reward modeling performance. 
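The two pairing schemes can be illustrated with a small sketch (our illustration; `responses`, mapping each prompt to its sampled responses, is a hypothetical structure): within-prompt annotation compares two responses to the same prompt, while cross-prompt annotation pairs responses drawn from different prompts.

```python
import random

random.seed(0)
# Hypothetical data: each prompt has several sampled responses
responses = {
    "prompt_a": ["a1", "a2", "a3"],
    "prompt_b": ["b1", "b2", "b3"],
    "prompt_c": ["c1", "c2", "c3"],
}

def sample_within_prompt_pair():
    """Standard annotation: both responses answer the same prompt."""
    x = random.choice(list(responses))
    y1, y2 = random.sample(responses[x], 2)
    return (x, y1), (x, y2)

def sample_cross_prompt_pair():
    """Cross-prompt annotation: the two responses answer different prompts."""
    x1, x2 = random.sample(list(responses), 2)
    return (x1, random.choice(responses[x1])), (x2, random.choice(responses[x2]))

pair_within = sample_within_prompt_pair()
pair_cross = sample_cross_prompt_pair()
```

Either kind of pair can then be sent to the annotator, since neither the BT nor the classification objective requires the two responses to share a prompt.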

To further motivate the usage of cross-prompt comparisons, we introduce the following notation on annotation quality and present a case study under a Gaussian assumption on score distributions. We consider a special case of $\xi$ in equation 8, $\xi (\cdot) = \sigma (\beta \cdot)$, so that the annotators' ability is characterized by $\beta$. When $\beta = 0$, we have random annotations:

$$
\mathbb {P} \left(h \left(x _ {1}, x _ {2}, y _ {1}, y _ {2}\right)\left(r \left(x _ {1}, y _ {1}\right) - r \left(x _ {2}, y _ {2}\right)\right) > 0 \mid \Delta r\right) = \sigma (\beta \Delta r) \xrightarrow {\beta \rightarrow 0} 0.5, \tag {46}
$$

and when $\beta \to \infty$, we have perfect annotations:

$$
\mathbb {P} \left(h \left(x _ {1}, x _ {2}, y _ {1}, y _ {2}\right)\left(r \left(x _ {1}, y _ {1}\right) - r \left(x _ {2}, y _ {2}\right)\right) > 0 \mid \Delta r\right) = \sigma (\beta \Delta r) \xrightarrow {\beta \rightarrow \infty} 1. \tag {47}
$$

In a nutshell, the annotators' abilities and the differences among prompt-response pairs together determine how much of the preference data is correctly labeled in the annotation. In the following, we show a special case where two responses to a prompt $x$ are randomly sampled from a single LLM $\ell$.

Example 1 (Annotation Quality under Gaussian Score). When data for pairwise annotation is generated through random sampling of two responses $y_{1}, y_{2} \sim \ell(x)$, we further assume the utilities of those two responses are sampled from a Gaussian distribution with variance $\sigma_{x}^{2}$, i.e., $y \sim \ell(x), r(x, y) \sim \mathcal{N}(\mu_{x}, \sigma_{x}^{2})$.
Then the annotation quality $\mathcal{Q}_{\mathrm{pair}}(x)$ on such a prompt can be defined as the averaged annotation order consistency:

$$
\mathcal {Q} _ {\mathrm {pair}} (x) = \mathbb {E} _ {y _ {1}, y _ {2} | x} [ \tau_ {x} ] = \mathbb {E} _ {y _ {1}, y _ {2} | x} \left[ \sigma \left(\beta \left| r (x, y _ {1}) - r (x, y _ {2}) \right|\right) \right] \tag {48}
$$

where $\tau_{x} = \sigma (\beta |r(x,y_{1}) - r(x,y_{2})|)$ is a random variable (over $y_{1},y_{2}$) whose probability density function is

$$
f _ {\tau_ {x} | x} (t) = \frac {1}{\sqrt {\pi \beta^ {2} \sigma_ {x} ^ {2}}} \exp \left(- \frac {\left(\log \left(\frac {t}{1 - t}\right)\right) ^ {2}}{4 \beta^ {2} \sigma_ {x} ^ {2}}\right) \cdot \frac {1}{t (1 - t)} \tag {49}
$$

Proof. To derive the PDF of $\tau_x$, denote $\tau = \sigma(|\rho|)$, where $\rho(x) = \beta (r(x, y_1) - r(x, y_2))$. Then $\rho(x) \sim \mathcal{N}(0, 2\beta^2 \sigma_x^2)$ follows a normal distribution, and $|\rho(x)|$ follows a folded normal distribution, whose cumulative distribution function, mean, and variance are given by

$$
F _ {| \rho |} (x; \mu = 0, \sigma^ {2} = 2 \beta^ {2} \sigma_ {x} ^ {2}) = \operatorname {erf} \left(\frac {x}{2 \beta \sigma_ {x}}\right), \mu_ {| \rho |} = \frac {2 \beta \sigma_ {x}}{\sqrt {\pi}}, \sigma_ {| \rho |} ^ {2} = 2 \beta^ {2} \sigma_ {x} ^ {2} (1 - \frac {2}{\pi}), \tag {50}
$$

respectively.

To find the PDF of $\tau = \sigma(|\rho|)$, we use the change-of-variables formula. If $Y = g(X)$ and $g$ is a monotonic function, the PDF of $Y$, $f_{Y}(y)$, can be obtained by:

$$
f _ {Y} (y) = f _ {X} \left(g ^ {- 1} (y)\right) \left| \frac {d}{d y} g ^ {- 1} (y) \right|. \tag {51}
$$

For the sigmoid function, the inverse is given by:

$$
g ^ {- 1} (y) = \log \left(\frac {y}{1 - y}\right). \tag {52}
$$

The derivative of the inverse sigmoid function is:

$$
\frac {d}{d y} g ^ {- 1} (y) = \frac {1}{y (1 - y)}. 
\tag {53}
$$

Plugging these into the change-of-variables formula, we get:

$$
f _ {\tau} (t) = \frac {1}{\sqrt {\pi \beta^ {2} \sigma_ {x} ^ {2}}} \exp \left(- \frac {\left(\log \left(\frac {t}{1 - t}\right)\right) ^ {2}}{4 \beta^ {2} \sigma_ {x} ^ {2}}\right) \cdot \frac {1}{t (1 - t)}. \tag {54}
$$

![](images/a7df88336fceaf3a0184272736631f3164699e460eef75a461e6b1dc17b7f9ea.jpg)

In this special case, it is easy to obtain some numerical results: when $\beta^2\sigma_x^2 = 1$, we have $\mathcal{Q}_{\mathrm{pair}} \approx 0.6749$, i.e., roughly $67.5\%$ of the data are correctly labelled by annotators. Similarly, when $\beta^2\sigma_x^2 = 2$, $\mathcal{Q}_{\mathrm{pair}} \approx 0.7251$; when $\beta^2\sigma_x^2 = 4$, $\mathcal{Q}_{\mathrm{pair}} \approx 0.7781$; and when $\beta^2\sigma_x^2 = 10$, $\mathcal{Q}_{\mathrm{pair}} \approx 0.8428$. This suggests that the effect of better annotators and the effect of response-utility diversity are always coupled: to improve data quality, we may either improve the ability of annotators or further diversify the generation utilities, as both of these parameters control the annotation quality. Next, we show that in this example, cross-prompt comparison is an effective practice to increase response-utility diversity.

Cross-Prompt Annotation Improves Quality under Gaussian Score When considering multiple prompts $x_{i}, i = 1,2,\ldots ,N$, we denote the corresponding responses as $y_{i}$, and scores $r(x_{i},y_{i})\sim \mathcal{N}(\mu_i,\sigma_i^2)$. In the following, we show that cross-prompt annotation can improve annotation quality.

Proposition G.1 (Cross-Prompt Comparisons Increase Utility Diversity).
When data for pairwise annotation is generated through random sampling of two responses $y_{1}, y_{2} \sim \ell(x)$, and the utilities of those two responses are sampled from a Gaussian distribution with variance $\sigma_{x}^{2}$, i.e., $y \sim \ell(x)$, $r_{x,y} \sim \mathcal{N}(\mu_{x}, \sigma_{x}^{2})$, then when there are multiple prompts $x$, we have

$$
\mathbb {E} _ {x} \mathbb {E} _ {y _ {1}, y _ {2} | x} \left[ \left| r _ {x, y _ {1}} - r _ {x, y _ {2}} \right| \right] \leq \mathbb {E} _ {x _ {1}, x _ {2}} \mathbb {E} _ {y _ {1} | x _ {1}, y _ {2} | x _ {2}} \left[ \left| r _ {x _ {1}, y _ {1}} - r _ {x _ {2}, y _ {2}} \right| \right] \tag {55}
$$

Proof. Let $x_{ik} \sim \mathcal{N}(\mu_k, \sigma_k^2)$ and $x_{jl} \sim \mathcal{N}(\mu_l, \sigma_l^2)$, with $k \neq l$; then

$$
x _ {i k} - x _ {j l} \sim \mathcal {N} \left(\mu_ {k} - \mu_ {l}, \sigma_ {k} ^ {2} + \sigma_ {l} ^ {2}\right) \tag {56}
$$

The expectation of $|x_{ik} - x_{jl}|$ is given by

$$
\mathbb {E} \left[ | x _ {i k} - x _ {j l} | \right] = \sqrt {\sigma_ {k} ^ {2} + \sigma_ {l} ^ {2}} \sqrt {\frac {2}{\pi}} \exp \left(- \frac {\left(\mu_ {k} - \mu_ {l}\right) ^ {2}}{2 \left(\sigma_ {k} ^ {2} + \sigma_ {l} ^ {2}\right)}\right) + | \mu_ {k} - \mu_ {l} | \operatorname {erf} \left(\frac {| \mu_ {k} - \mu_ {l} |}{\sqrt {2 \left(\sigma_ {k} ^ {2} + \sigma_ {l} ^ {2}\right)}}\right) \tag {57}
$$

As a special case, let $x_{jk}\sim \mathcal{N}(\mu_k,\sigma_k^2)$; then

$$
x _ {i k} - x _ {j k} \sim \mathcal {N} \left(0, 2 \sigma_ {k} ^ {2}\right) \tag {58}
$$

$$
\mathbb {E} \left[ \left| x _ {i k} - x _ {j k} \right| \right] = 2 \sigma_ {k} \sqrt {\frac {1}{\pi}} \tag {59}
$$

We consider the special case of $\mu_{k} = \mu_{l}$, and compare the cross-prompt difference against the average within-prompt difference:

$$
\frac {\mathbb {E} \left[ \left| x _ {i k} - x _ {j l} \right| \right]}{\frac {1}{2} \mathbb {E} \left[ \left| x _ {i k} - x _ {j k} \right| \right] + \frac {1}{2} \mathbb {E} \left[ \left| x _ {i l} - x _ {j l} \right| \right]} \geq 1, \tag {60}
$$

where the equality holds only if $\sigma_k^2 = \sigma_l^2$.
This is because, letting $t = \frac{\sigma_k}{\sigma_l}$, the squared ratio equals $\frac{2(1 + t^2)}{(1 + t)^2}$, which attains its minimum value of 1 only at $t = 1$. Since $\sqrt{\frac{2}{\pi}}\exp (-x^2) + |x|\operatorname {erf}(|x|)$ is a monotonically increasing function for $x\geq 0$, equation 60 also holds for $\mu_{k}\neq \mu_{l}$.

This gives us the intuitive result that, in expectation, the reward differences in cross-prompt comparisons are larger than the reward differences between prompt-response pairs sharing the same prompt.

More generally, cross-prompt comparisons improve data quality whenever the utility distribution of randomly sampled responses to a single prompt is unimodal and symmetric (e.g., Gaussian).

Theorem G.2 (Cross-Prompt Annotation Improves Annotation Quality). Suppose data for pairwise annotation is generated through random sampling of two responses $y_{1}, y_{2} \sim \ell(x)$, and the utilities of those two responses are sampled from a location-scale family with probability density function $g_{x}(x) = f((x - \mu_{x}) / \sigma_{x})$, with $f$ unimodal and symmetric about 0. Then for any $\xi : \mathbb{R}_{+} \to [1/2,1]$ that is first-order differentiable, monotone increasing, and concave, we have

$$
\begin{array}{l} \mathbb {E} _ {x} [ \mathcal {Q} _ {\mathrm {pair}} (x) ] = \mathbb {E} _ {x} \mathbb {E} _ {y _ {1}, y _ {2} | x} \left[ \xi \big (| r _ {x, y _ {1}} - r _ {x, y _ {2}} | \big) \right] \\ \leq \mathbb {E} _ {x _ {1}, x _ {2}} \mathbb {E} _ {y _ {1} | x _ {1}, y _ {2} | x _ {2}} [ \xi (| r _ {x _ {1}, y _ {1}} - r _ {x _ {2}, y _ {2}} |) ] := \mathbb {E} _ {x _ {1}, x _ {2}} \left[ \mathcal {Q} _ {\text {cross-prompt}} \left(x _ {1}, x _ {2}\right) \right]. \tag {61} \\ \end{array}
$$

In the above equation, $\mathcal{Q}_{\mathrm{cross - prompt}}(x_1,x_2)$ is defined slightly differently from $\mathcal{Q}_{\mathrm{pair}}(x)$, which only takes one prompt as its input.
We can understand $\mathcal{Q}_{\mathrm{pair}}(x)$ as a special case of $\mathcal{Q}_{\mathrm{cross - prompt}}(x_1,x_2)$ with $x_{1} = x_{2} = x$. Theorem G.2 highlights that cross-prompt comparisons improve annotation quality for a broad class of utility distributions and choices of $\xi$.

Proof. Theorem G.2 follows directly from the combination of Lemma G.3 - Lemma G.5. $\square$

![](images/dbc70bfda4f0013494f4bc30f7f142cb12778b5bcf1c268cdd4befe16383dd12.jpg)

Section Summary In this section, we highlighted the theoretical superiority of using cross-prompt comparisons in preference annotations. In expectation, cross-prompt comparisons can improve annotation quality since they increase the expected differences between prompt-response pairs.

Lemma G.3. Suppose $\xi : \mathbb{R}_+ \to [1/2, 1]$ is first-order differentiable and monotone increasing, and $z \sim f$ with $f$ a density that is symmetric about $0$ and unimodal. Then for all $\mu$,

$$
\mathbb {E} (\xi (| z + \mu |)) \geq \mathbb {E} (\xi (| z |)) \tag {62}
$$

Proof. Without loss of generality, we may assume $\mu \geq 0$: if the result holds for $\mu \geq 0$, it must also hold for $\mu \leq 0$, since $-z \sim f$ by symmetry, and applying the result for positive $\mu$ gives $\mathbb{E}(\xi(|z + \mu|)) = \mathbb{E}(\xi(|-z - \mu|)) \geq \mathbb{E}(\xi(|-z|)) = \mathbb{E}(\xi(|z|))$.

It thus suffices to show the result for nonnegative $\mu$. To do so, we prove that this expectation, as a function of $\mu$, is monotone increasing by taking the derivative.

$$
\begin{array}{l} \frac {d}{d \mu} \mathbb {E} (\xi (| z + \mu |)) = \mathbb {E} \left[ \frac {\partial}{\partial \mu} \xi (| z + \mu |) \right] \\ = \mathbb {E} \left[ \frac {d}{d | z + \mu |} \xi (| z + \mu |) \operatorname {sign} (z + \mu) \right] \\ = \int_ {- \infty} ^ {\infty} \frac {d}{d | z + \mu |} \xi (| z + \mu |) \operatorname {sign} (z + \mu) f (z) d z \\ = \int_ {- \mu} ^ {\infty} \frac {d}{d | z + \mu |} \xi (| z + \mu |) f (z) d z - \int_ {- \infty} ^ {- \mu} \frac {d}{d | z + \mu |} \xi (| z + \mu |) f (z) d z \\ = \int_ {0} ^ {\infty} \frac {d}{d | z |} \xi (| z |) f (z - \mu) d z - \int_ {- \infty} ^ {0} \frac {d}{d | z |} \xi (| z |) f (z - \mu) d z \\ = \int_ {0} ^ {\infty} \frac {d}{d | z |} \xi (| z |) f (z - \mu) d z - \int_ {0} ^ {\infty} \frac {d}{d | z |} \xi (| z |) f (z + \mu) d z \\ \geq 0 \\ \end{array}
$$

The last line is due to unimodality: for all $z\in [0,\infty)$ we must have $f(z - \mu)\geq f(z + \mu)$ when $\mu \geq 0$, while $\frac{d}{d|z|}\xi (|z|)$ is non-negative and bounded since $\xi$ is monotone increasing.

Lemma G.4. Suppose $x_{1}, x_{2}$ are iid from a unimodal, symmetric location-scale family density $g_{x}$, i.e., the density of $x_{1}, x_{2}$ can be written as $g_{x}(x) = f((x - \mu_{x}) / \sigma_{x})$ with $f$ unimodal and symmetric about 0. Further, suppose $y_{1}, y_{2} \sim g_{y}$ iid with density $g_{y}(y) = f((y - \mu_{y}) / \sigma_{y})$ for the same $f$. Then for a function $\xi : \mathbb{R}_{+} \to [1/2,1]$ that is first-order differentiable, monotone increasing, and concave,

$$
\frac {1}{2} \mathbb {E} (\xi (| x _ {1} - x _ {2} |)) + \frac {1}{2} \mathbb {E} (\xi (| y _ {1} - y _ {2} |)) \leq \mathbb {E} (\xi (| x _ {1} - y _ {1} |)) \tag {63}
$$

Proof. By assumption, $y_{1}, y_{2}, x_{1}, x_{2}$ are from the same location-scale family.
There exists a $z$ whose density is $f$ such that $y_{1}$ has the same distribution as $\sigma_y z + \mu_y$, $y_{1} - y_{2}$ has the same distribution as $\sqrt{2} \sigma_y z$, $x_{1} - x_{2}$ has the same distribution as $\sqrt{2} \sigma_x z$, and $x_{1} - y_{1}$ has the same distribution with scale $\sqrt{\sigma_x^2 + \sigma_y^2}$ and location $\mu_x - \mu_y$. We first find an upper bound of the left-hand side using Jensen's inequality, as $\xi$ is concave:

$$
\begin{array}{l} \frac {1}{2} \mathbb {E} (\xi (| x _ {1} - x _ {2} |)) + \frac {1}{2} \mathbb {E} (\xi (| y _ {1} - y _ {2} |)) = \frac {1}{2} \mathbb {E} (\xi (\sqrt {2} \sigma_ {x} | z |)) + \frac {1}{2} \mathbb {E} (\xi (\sqrt {2} \sigma_ {y} | z |)) \quad (64) \\ = \mathbb {E} \left(\frac {1}{2} \xi (\sqrt {2} \sigma_ {x} | z |) + \frac {1}{2} \xi (\sqrt {2} \sigma_ {y} | z |)\right) \quad (65) \\ \leq \mathbb {E} \left(\xi \left(\frac {1}{2} \sqrt {2} \sigma_ {x} | z | + \frac {1}{2} \sqrt {2} \sigma_ {y} | z |\right)\right) \quad (66) \\ = \mathbb {E} \left(\xi \left(\sqrt {\frac {\left(\sigma_ {x} + \sigma_ {y}\right) ^ {2}}{2}} | z |\right)\right) \quad (67) \\ \end{array}
$$

For the right-hand side,

$$
\begin{array}{l} \mathbb {E} \left(\xi \left(| x _ {1} - y _ {1} |\right)\right) = \mathbb {E} \left[ \xi \left(\left| \mu_ {x} - \mu_ {y} + \sqrt {\sigma_ {x} ^ {2} + \sigma_ {y} ^ {2}} z \right|\right) \right] \quad (69) \\ \geq \mathbb {E} \left[ \xi \left(\sqrt {\sigma_ {x} ^ {2} + \sigma_ {y} ^ {2}} | z |\right) \right] \quad (70) \\ \end{array}
$$

by Lemma G.3. We observe that for all $\sigma_x, \sigma_y$,

$$
\sqrt {\frac {(\sigma_ {x} + \sigma_ {y}) ^ {2}}{2}} \leq \sqrt {\sigma_ {x} ^ {2} + \sigma_ {y} ^ {2}} \tag {71}
$$

Thus, since $\xi$ is increasing, for all $|z|$ we have $\xi\left(\sqrt{\sigma_x^2 + \sigma_y^2} |z|\right) \geq \xi\left(\sqrt{\frac{(\sigma_x + \sigma_y)^2}{2}} |z|\right)$, and taking expectations yields the result in the statement.

Lemma G.5.
We have that, for the random variables described above, and supposing $\sigma_x, \sigma_y$ are iid and $\mu_x, \mu_y$ are iid,

$$
\mathbb {E} _ {\sigma_ {x}, \mu_ {x}} \mathbb {E} _ {x \mid \sigma_ {x}, \mu_ {x}} (\xi (| x _ {1} - x _ {2} |)) \leq \mathbb {E} _ {\sigma_ {x}, \sigma_ {y}, \mu_ {x}, \mu_ {y}} \mathbb {E} _ {x, y \mid \sigma_ {x}, \sigma_ {y}, \mu_ {x}, \mu_ {y}} (\xi (| x _ {1} - y _ {1} |)) \tag {72}
$$

Proof. Since $\sigma_x, \sigma_y$ and $\mu_x, \mu_y$ are iid, we can rewrite the left-hand side as

$$
\begin{array}{l} \mathbb {E} _ {\sigma_ {x}, \mu_ {x}} \mathbb {E} _ {x \mid \sigma_ {x}, \mu_ {x}} (\xi (| x _ {1} - x _ {2} |)) = \frac {1}{2} \left[ \mathbb {E} _ {\sigma_ {x}, \mu_ {x}} \mathbb {E} _ {x \mid \sigma_ {x}, \mu_ {x}} (\xi (| x _ {1} - x _ {2} |)) + \mathbb {E} _ {\sigma_ {y}, \mu_ {y}} \mathbb {E} _ {y \mid \sigma_ {y}, \mu_ {y}} (\xi (| y _ {1} - y _ {2} |)) \right] \quad (73) \\ = \mathbb {E} _ {\sigma_ {x}, \sigma_ {y}, \mu_ {x}, \mu_ {y}} \left[ \frac {1}{2} \mathbb {E} _ {x \mid \sigma_ {x}, \mu_ {x}} (\xi (| x _ {1} - x _ {2} |)) + \frac {1}{2} \mathbb {E} _ {y \mid \sigma_ {y}, \mu_ {y}} (\xi (| y _ {1} - y _ {2} |)) \right] \quad (74) \\ \end{array}
$$

and the statement reduces to the two-pair case shown above.

# H DIRECT EVALUATION METRICS FOR REWARD MODELS

Explicitly evaluating the accuracy of reward models is useful for understanding the performance of the different approaches. Below, we present the results of two evaluation metrics: we first show the success rate of choosing a higher-quality response compared with the population median score of responses (higher is better) in Figure 16. In Figure 17, we present the performance percentile of the responses selected by each reward model within the population (higher is better). We find that different implementations of the classification-based reward models perform similarly and are in general better than the BT models; these findings are well aligned with the BoN evaluations.
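To make the two metrics concrete, a minimal sketch is given below (our illustration with synthetic data; `true_scores`, `rm_scores`, and the best-of-8 selection are illustrative assumptions, not the paper's exact evaluation protocol):

```python
import random

random.seed(0)
n = 1000
# Hypothetical pools: ground-truth utility and reward-model score per response
true_scores = [random.gauss(0.0, 1.0) for _ in range(n)]
# A noisy reward model: correlated with the truth but imperfect
rm_scores = [t + random.gauss(0.0, 0.5) for t in true_scores]

def median(xs):
    s = sorted(xs)
    return 0.5 * (s[len(s) // 2] + s[(len(s) - 1) // 2])

def bon_pick(candidates):
    """Best-of-N: index of the candidate the reward model ranks highest."""
    return max(candidates, key=lambda i: rm_scores[i])

# Metric 1: success rate of the BoN pick scoring above the population median
# Metric 2: true-score percentile of the BoN pick within the population
pop_median = median(true_scores)
trials, successes, percentiles = 200, 0, []
for _ in range(trials):
    pick = bon_pick(random.sample(range(n), 8))  # N = 8 candidates per trial
    successes += true_scores[pick] > pop_median
    percentiles.append(sum(t <= true_scores[pick] for t in true_scores) / n)
success_rate = successes / trials
mean_percentile = sum(percentiles) / len(percentiles)
```

A perfect reward model drives both metrics toward 1, while a random one leaves both near 0.5, which is what makes them informative complements to the BoN evaluations.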
+ +![](images/aee8b1c94ea52229ab2d49c5c03adbdb8c907aa655bbcbec6f65dd8ecb4c4f38.jpg) +Figure 16: Success rate of choosing a higher-scored response than the population median on the Harmless (left) and Helpful (right) datasets. The results are from 5 seeds. + +![](images/83f0c01852c20a5b138d27b6d12bf6973a41f613212d9c27ac62ebafb384403d.jpg) + +![](images/7721ec82e6505382294ec6b7666818ed7c6d474d11c7239fa2fbddc420ca1ad5.jpg) +Figure 17: BoN performance percentile on the Harmless (left) and Helpful (right) datasets. The results are from 5 seeds. + +![](images/5e8b21314f069851c66abfe06001ed35a950bf7541dcb95fab44d3a84e73bc11.jpg) \ No newline at end of file