
Adding Error Bars to Evals:

A Statistical Approach to Language Model Evaluations

Abstract

Evaluations are critical for understanding the capabilities of large language models (LLMs). Fundamentally, evaluations are experiments; but the literature on evaluations has largely ignored the literature from other sciences on experiment analysis and planning. This article shows researchers with some training in statistics how to think about and analyze data from language model evaluations. Conceptualizing evaluation questions as having been drawn from an unseen super-population, we present formulas for analyzing evaluation data, measuring differences between two models, and planning an evaluation experiment. We make a number of specific recommendations for running language model evaluations and reporting experiment results in a way that minimizes statistical noise and maximizes informativeness.

1 Introduction

Language models are measured in the literature by evaluations, or evals. Evals are commonly run and reported with a “highest number is best” mentality; industry practice is to highlight a state-of-the-art (SOTA) result in bold, but not necessarily to test that result for any kind of statistical significance.[madaan2024quantifyingvarianceevaluationbenchmarks] Chatbot Arena[chiang2024chatbot] has popularized the use of confidence intervals in its Elo scores, but error bars remain noticeably absent from traditional question-and-answer evals. One recent and notable exception is the technical report on the Llama 3 model family[dubey2024llama3herdmodels], which includes simple confidence intervals on a number of evals.

In this article, we seek to introduce rigorous statistical thinking into the world of language model evaluations, so that researchers may quantify the precision with which they are able to answer questions and test hypotheses using evals. After developing a comprehensive analytic framework, we make specific recommendations for the computation of confidence intervals and the reporting of eval results. Using this framework, we show that the confidence intervals recently reported in [dubey2024llama3herdmodels] are likely too narrow in some cases and too wide in other cases.

A short hypothetical example will motivate the overall discussion. Imagine that two competing models, code-named “Galleon” and “Dreadnought”, are being considered for deployment in a particular application (say, with a bent toward coding and mathematical reasoning tasks). As part of the decision-making process, three popular language model evaluations are performed on the two models: MATH, a mathematical reasoning eval[hendrycks2021math]; HumanEval, a Python coding eval[chen2021evaluatinglargelanguagemodels]; and MGSM, a multilingual eval covering grade-school math[shi2022languagemodelsmultilingualchainofthought]. The fictional results from this hypothetical comparison are presented in Table 1.

| Eval \ Model | "Galleon" | "Dreadnought" | Difference |
|---|---|---|---|
| MATH | 65.5% | 63.0% | +2.5% |
| HumanEval | 83.6% | 86.7% | −3.1% |
| MGSM | 75.3% | 78.0% | −2.7% |

Table 1: Hypothetical data from two fictional models across three (non-fictional) evals

On its face, the data table presents conflicting results: Galleon appears to outperform Dreadnought on MATH (65.5% vs 63.0%), but Dreadnought has performed better on HumanEval and MGSM by slightly wider margins. Is it safe to conclude from the results that Dreadnought is generally better suited for coding and mathematical tasks, given its margin of victory on two of three evals? Or should something in the data potentially give us pause?

“Evaluating the evaluations” is a complex undertaking fraught with both qualitative and quantitative considerations.[ganguli2023challenges] Remaining agnostic about the relative and qualitative merits of various evals, this article develops a framework for answering quantitative questions about specific eval results. With the aim of informing holistic decision-making, we offer a series of recommendations for running and reporting evals in a way that enables researchers to test well-formed hypotheses about competing models, competing hyperparameters, and competing prompts. Our specific recommendations to researchers include:

  1. Computing standard errors of the mean using the Central Limit Theorem
  2. When questions are drawn in related groups, computing clustered standard errors
  3. Reducing variance by resampling answers and by analyzing next-token probabilities
  4. When two models are being compared, conducting statistical inference on the question-level paired differences, rather than the population-level summary statistics
  5. Using power analysis to determine whether an eval (or a random subsample) is capable of testing a hypothesis of interest

Drawing on statistical theory and the experimental design literature, we demonstrate that a small number of conceptual assumptions unlocks a rich theoretical landscape for researchers studying language model evaluations, and show practitioners how to conduct statistical inference on often-noisy eval data. For an overview of experiment design, we refer the reader to [Imbens_Rubin_2015].

2 Analysis framework

Suppose that the questions in an eval do not represent all possible questions, but instead were drawn at random from a (hypothetical, infinite, unseen) super-population of questions. This simple supposition lets us jump “through the looking glass” of the specific questions that appear in an eval in order to study the underlying skill that the eval is attempting to measure. We modify this assumption in Section 2.2 to study questions that may have been drawn together.

2.1 Independent questions

More formally, suppose an eval consists of $n$ independently drawn questions. We write the score on question $i$ as $s_i$, decomposing the score into a mean component $x_i$ and a zero-mean random component $\epsilon_i$:

$$s_i = x_i + \epsilon_i$$

We refer to $x_i$ as the conditional mean and the variance of the random component $\epsilon_i$ as the conditional variance, that is, the mean and variance conditional on the selection of question $i$ in the eval. Denote this latter quantity $\sigma_i^2 = \mathrm{Var}(\epsilon_i)$.

We also write an unconditional version of these numbers (that is, unconditional on the selection of $i$) as

$$s = x + \epsilon$$

Let $\mu$ represent the unobserved mean value of the super-population with $\mu = \mathbb{E}[s] = \mathbb{E}[x]$. We wish to conduct inference on $\mu$ (that is, the "true" mean eval score) given only the observed question scores $s_1, \ldots, s_n$. Let $\bar{s} = \frac{1}{n}\sum_i s_i$ represent the average of the observed scores. It follows from the Law of Large Numbers that $\mu$ may be estimated using $\hat{\mu} = \bar{s}$, and from the Central Limit Theorem (C.L.T.) that the standard error of $\hat{\mu}$ can be estimated as

$$\mathrm{SE}_{\mathrm{C.L.T.}} = \sqrt{\mathrm{Var}(s)/n} = \sqrt{\left(\frac{1}{n-1}\sum_i (s_i - \bar{s})^2\right)/n} \tag{1}$$

In the special case where $s_i \in \{0, 1\}$ (a Bernoulli variable), Equation 1 becomes

$$\mathrm{SE}_{\mathrm{Bernoulli}} = \sqrt{\bar{s}(1-\bar{s})/n} \tag{2}$$

We note that it is common practice to compute the standard error of the mean by bootstrapping; see, for instance, the OpenAI evals[openai-evals] framework. But the Central Limit Theorem is applicable to any eval whose scores have finite variance and a large number of questions, and so we regard bootstrapping as unnecessary unless a complicated sampling scheme or estimator is being used. We also note that [dubey2024llama3herdmodels] incorrectly estimates all of its standard errors using $\mathrm{SE}_{\mathrm{Bernoulli}}$, even when $s_i$ takes fractional values, such as with an F1 score. In these cases, $\mathrm{SE}_{\mathrm{Bernoulli}}$ will tend to be conservative (too wide) compared to $\mathrm{SE}_{\mathrm{C.L.T.}}$. The Inspect framework[uk-inspect] correctly computes $\mathrm{SE}_{\mathrm{C.L.T.}}$ with its built-in stderr() metric.
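To make the computation concrete, the following is a minimal Python sketch of Equations 1 and 2; the function names and the simulated scores are our own, purely illustrative:

```python
import numpy as np

def se_clt(scores):
    """Standard error of the mean via the C.L.T. (Equation 1)."""
    scores = np.asarray(scores, dtype=float)
    return np.sqrt(np.var(scores, ddof=1) / len(scores))

def se_bernoulli(scores):
    """Special case for 0/1 scores (Equation 2)."""
    s_bar = np.mean(scores)
    return np.sqrt(s_bar * (1 - s_bar) / len(scores))

# Hypothetical eval: 1,000 binary question scores
rng = np.random.default_rng(0)
scores = rng.binomial(1, 0.7, size=1000)
print(f"mean={scores.mean():.3f}, SE={se_clt(scores):.4f}")
```

For binary scores the two estimates agree up to the $n$ versus $n-1$ factor in the sample variance.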

We suggest reporting the standard error of the mean alongside (beneath) the mean when reporting eval scores. A common practice in other sciences is to report the standard error in parentheses; we suggest emulating this practice. See Table 2 for an example.

| Eval | Questions | "Galleon" | "Dreadnought" |
|---|---|---|---|
| MATH | 5,000 | 65.5% (0.7%) | 63.0% (0.7%) |
| HumanEval | 164 | 83.6% (3.2%) | 86.7% (3.0%) |
| MGSM | 2,500 | 75.3% (0.9%) | 78.0% (0.9%) |

Table 2: We suggest two new reporting practices: including the number of questions in each eval, and the standard error of each estimate in parentheses (fictional models and numbers).

A 95% confidence interval may be computed from such a table as

$$\mathrm{CI}_{95\%} = \bar{s} \pm 1.96 \times \mathrm{SE}_{\mathrm{C.L.T.}} \tag{3}$$

2.2 Clustered questions

We next consider eval questions that are drawn in groups, or clusters. For instance, DROP[dua-etal-2019-drop], QuAC[choi2018quacquestionanswering], RACE[lai-etal-2017-race], and SQuAD[rajpurkar2018knowdontknowunanswerable] are reading-comprehension evals having multiple related questions about independently selected passages of text, and multilingual evals such as MGSM[shi2022languagemodelsmultilingualchainofthought] consist of the same question translated into many languages. Because the inclusion of questions is non-independent, a key assumption of the Central Limit Theorem (or a bootstrap) is violated, and so a naive application of Equation 1 will yield inconsistent standard errors. Here we show how to use clustered standard errors[clustered], a technique developed in the social sciences, to account for the dependence and correlation structure present in question clusters.

Let $s_{i,c}$ denote the $i$th question score within cluster $c$, and assume that draws of clusters are independent. Continue to denote the mean observed score as $\bar{s}$. The cluster-adjusted standard error of the mean score can be computed as

$$\mathrm{SE}_{\mathrm{clustered}} = \left(\mathrm{SE}_{\mathrm{C.L.T.}}^2 + \frac{1}{n^2}\sum_c \sum_i \sum_{j \neq i} (s_{i,c} - \bar{s})(s_{j,c} - \bar{s})\right)^{1/2} \tag{4}$$

The clustered standard error acts as a kind of “sliding scale” between cases where scores within a cluster are perfectly correlated (in which case each cluster acts as a single independent observation) and perfectly uncorrelated (in which case the clustered standard error is equivalent to the unclustered case). The intra-cluster correlations (or lack thereof) are captured by the triple summation (over clusters and cross-terms within clusters); for a derivation of Equation 4, see Appendix A.
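As an illustration, here is a minimal NumPy sketch of the clustered computation. It uses the grouped-summation form from the derivation in Appendix A, which sums all within-cluster cross products including the diagonal terms (so it uses $n$ rather than $n-1$ in the variance; the difference is negligible for large $n$). The data layout, one score and one cluster label per question, is an assumption of ours:

```python
import numpy as np

def se_clustered(scores, clusters):
    """Cluster-adjusted standard error of the mean (Equation 4 / Appendix A)."""
    scores = np.asarray(scores, dtype=float)
    clusters = np.asarray(clusters)
    n = len(scores)
    resid = scores - scores.mean()
    # Sum of squared within-cluster residual sums equals the sum of all
    # within-cluster cross products (diagonal plus off-diagonal terms)
    total = sum(resid[clusters == c].sum() ** 2 for c in np.unique(clusters))
    return np.sqrt(total) / n

# Hypothetical reading-comprehension eval: cluster = passage ID
scores = np.array([1, 1, 0, 0, 1, 0, 1, 1])
clusters = np.array([0, 0, 0, 1, 1, 2, 2, 2])
print(se_clustered(scores, clusters))
```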

| Eval | Questions | # Clusters | "Galleon" | "Dreadnought" |
|---|---|---|---|---|
| DROP | 9,622 | 588 | 87.1 (0.8) | 83.1 (0.9) |
| RACE-H | 3,498 | 1,045 | 91.5% (0.5%) | 82.9% (0.7%) |
| MGSM | 2,500 | 250 | 75.3% (1.6%) | 78.0% (1.5%) |

Table 3: We suggest including the cluster count alongside the question count when reporting cluster-adjusted standard errors (fictional models and numbers).

| Eval | $\mathrm{SE}_{\mathrm{clustered}}$ | $\mathrm{SE}_{\mathrm{C.L.T.}}$ | Ratio |
|---|---|---|---|
| DROP | (1.34) | (0.44) | 3.05 |
| RACE-H | (0.51%) | (0.46%) | 1.10 |
| MGSM | (1.62%) | (0.86%) | 1.88 |

Table 4: Clustered and naive standard errors computed on two popular evals using Anthropic models (non-fictional numbers). Analyzing the same data, clustered standard errors can be over 3X larger than naive standard errors.

A suggested format for reporting cluster-adjusted standard errors is presented in Table 3, and some real-world numbers are reported in Table 4. We note that the cluster adjustment in our real-world example is far from trivial (up to 3X). Failure to adjust standard errors for clustered sampling may lead an unsuspecting analyst to suppose that the measurement of the overall eval score is much more precise than it actually is. We therefore advise that the confidence intervals for reading-comprehension evals reported in [dubey2024llama3herdmodels] are likely anti-conservative (too narrow).

3 Variance reduction

The standard error of $\hat{\mu}$ quantifies the uncertainty associated with our estimate of the overall eval score. Reducing this quantity (which is the square root of the estimate variance) improves the precision of the estimate.

The variance associated with μ^^𝜇\hat{\mu}over^ start_ARG italic_μ end_ARG may be decomposed into two components: the variance of the conditional mean, that is, the variance associated with choosing questions from the super-population, and the mean conditional variance, which is the mean variance of scores associated with the questions that were chosen. This decomposition is additive, and follows from the law of total variance. Mathematically,

$$\mathrm{Var}(\hat{\mu}) = \mathrm{Var}(s)/n = \left(\mathrm{Var}(x) + \mathbb{E}[\sigma_i^2]\right)/n$$

This equation has a few implications: the simplest way to reduce the variance of $\hat{\mu}$ is to increase $n$, the number of sampled questions. The variance of the conditional mean, $\mathrm{Var}(x)$, is a property of the super-population and therefore immutable; but we have a couple of strategies available for reducing the overall variance via the expected conditional variance, $\mathbb{E}[\sigma_i^2]$.

3.1 Resampling

The first strategy for reducing the expected conditional variance is to resample the model a number of times, and to compute the standard error using the question-level mean scores from the resamples. Suppose each question is sampled (answered) $K$ times, and the score $s_i$ is the mean of these $K$ answer scores. Since the conditional variances are equal for all $K$ answer scores, the overall conditional variance becomes

$$\mathrm{Var}(s_i) = \sigma_i^2 / K$$

This relation should clarify the issue of how many times to resample a given question. Once $\mathbb{E}[\sigma_i^2]/K \ll \mathrm{Var}(x)$, increasing $K$ further will have little effect on the standard error of $\hat{\mu}$.

We work through an example to show how to compute a value for $K$. Suppose scores are binary (0 or 1) and question difficulty is uniformly distributed, $x \sim U[0,1]$. Then $\epsilon_i = 1 - x_i$ with probability $x_i$ and $\epsilon_i = -x_i$ otherwise, so that $\sigma_i^2 = x_i(1 - x_i)$. A bit of integration reveals that $\mathrm{Var}(x) = 1/12$ and $\mathbb{E}[\sigma_i^2] = \int_0^1 x(1-x)\,dx = 1/6$. The required relation reduces to $K \gg 2$, or equivalently $2/K \ll 1$. Writing the variance of this estimator with arbitrary $K$ in terms of the estimator with $K=1$, we have

$$\mathrm{Var}(\hat{\mu} \mid K > 1) = \mathrm{Var}(\hat{\mu} \mid K = 1) \times (1 + 2/K)/3$$

Going from $K=1$ (no resampling of answers) to $K=2$, the total variance is reduced by 1/3. Increasing to $K=4$, we have a variance reduction of 1/2, and setting $K=6$, we reduce variance by 5/9. The upper limit on variance reduction via resampling in this example is 2/3.

Note that computing a pooled standard error across all $KN$ answers will be inconsistent, as multiple answers to the same question would violate the assumption of independent draws. Refer to Section 2.2 for a discussion of questions drawn in related groups.
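A minimal sketch of the resampling estimator, assuming a hypothetical score array of shape ($n$ questions, $K$ resampled answers):

```python
import numpy as np

def se_resampled(answer_scores):
    """SE of the mean from question-level means of K resampled answers.

    answer_scores: array of shape (n, K); entry [i, k] scores the k-th
    sampled answer to question i. Averaging over axis 1 shrinks the
    conditional variance term by a factor of K; pooling all n*K answers
    as if they were independent draws would understate the SE.
    """
    question_means = answer_scores.mean(axis=1)  # the s_i
    n = len(question_means)
    return np.sqrt(np.var(question_means, ddof=1) / n)
```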

3.2 Next-token probabilities

The second strategy for reducing the expected conditional variance $\mathbb{E}[\sigma_i^2]$ is to eliminate the term altogether. For language model evals that do not utilize chain-of-thought reasoning, the conditional variance can be removed by analyzing next-token probabilities, rather than evaluating the model's sampled (or resampled) output.

Consider for instance a multiple-choice eval, and a prompt that induces a model to produce its answer in its first token. If a correct answer is worth 1 and an incorrect answer is worth 0, and the probability of the correct token is denoted $p_i$, then $s_i = x_i = p_i$ and $\epsilon_i = 0$. This yields $\mathrm{Var}(\hat{\mu}) = \mathrm{Var}(p)/n$. Using the uniform-difficulty example from the previous section, next-token probabilities will reduce the variance of the estimator by 2/3 (the upper limit achievable via resampling) compared to grading a single sample from each question.
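In code, the change is simply to score each question with the probability of the correct answer instead of a sampled 0/1 grade; a sketch, assuming the eval harness exposes a hypothetical `correct_token_probs` array:

```python
import numpy as np

def se_from_probs(correct_token_probs):
    """SE of the mean score when s_i = p_i, so the conditional variance is zero."""
    p = np.asarray(correct_token_probs, dtype=float)
    return np.sqrt(np.var(p, ddof=1) / len(p))
```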

3.3 Don’t touch the thermostat!

It may be tempting to reduce the “sampling temperature”[hinton2015distillingknowledgeneuralnetwork] of the model in order to reduce (or eliminate) the conditional variance. However, we advise against this practice, unless the purpose is to study the model at the new temperature. Besides altering the model’s behavior, adjusting the sampling temperature may simply shift the conditional variance (which can be mitigated using the two techniques above) into the variance of the conditional means (which cannot), or else reduce conditional variance by injecting bias into the estimator. Two short examples will illustrate these points.

Consider a single-token true/false eval where the conditional score means at $T=1$ are uniformly distributed, $x_{T=1} \sim U[0,1]$. As in Section 3.1, $\mathrm{Var}(x_{T=1}) = 1/12$. But at $T=0$, $x_{T=0} = \mathbf{1}\{x_{T=1} > 0.5\}$ and the uniform distribution is "rounded" into a Bernoulli distribution with $p = 1/2$. So $\mathrm{Var}(x_{T=0}) = 1/4$. In this case, reducing the sampling temperature, and thereby eliminating the conditional variance, has inadvertently tripled the minimum variance in the score data from 1/12 to 1/4.

In the above case, $\mathbb{E}[x_{T=1}] = \mathbb{E}[x_{T=0}]$, but this does not always hold. Consider a similar (single-token, true/false) case where $x_{T=1} \sim U[1/3, 1]$ and (as a consequence) $x_{T=0}$ is rounded to a Bernoulli distribution with $p = 3/4$. Then $\mathbb{E}[x_{T=1}] = 2/3 < \mathbb{E}[x_{T=0}] = 3/4$ and $\mathrm{Var}(x_{T=1}) = 1/27 \ll \mathrm{Var}(x_{T=0}) = 3/16$; that is, not only has the temperature change shifted the expected score, but the variance of the conditional means has increased approximately five-fold.

We therefore recommend a two-pronged variance-reduction strategy. When next-token probabilities are available, and the language model eval can be conducted using next-token probabilities (i.e. without token generation), compute the expected score for each question, and compute the standard error of expected scores across questions. When next-token probabilities are not available, or the answer requires a chain of thought or other complex interaction, choose a $K$ such that $\mathbb{E}[\sigma_i^2]/K \ll \mathrm{Var}(x)$ and compute the standard error across question-level mean scores. In neither case should the sampling temperature be adjusted for the sake of reducing variance in the scores.

4 Comparing models

Thus far we have only analyzed the standard error of eval scores considered in isolation. But a particular model’s score on a given eval usually does not have any inherent meaning; it primarily makes sense in relation to the scores of other models. In this section we provide formulas for comparing the scores of two models so that an analyst might determine if a model is outperforming another model in a statistically significant way, or if the difference between two models is indistinguishable from noise.

4.1 Unpaired analysis

We introduce model subscripts $A$ and $B$ for the remainder of the article. A naive comparison between eval scores can be made by computing the difference between mean eval scores

$$\hat{\mu}_{A-B} = \hat{\mu}_A - \hat{\mu}_B$$

and an associated standard error

$$\mathrm{SE}_{A-B} = \sqrt{\mathrm{SE}_A^2 + \mathrm{SE}_B^2}$$

These two quantities can be used to compute the usual 95% confidence interval and z-score

$$\mathrm{CI}_{A-B,\,95\%} = \hat{\mu}_{A-B} \pm 1.96 \times \mathrm{SE}_{A-B} \tag{5}$$

$$z_{A-B} = \hat{\mu}_{A-B} / \mathrm{SE}_{A-B} \tag{6}$$

If two models independently report their eval scores and standard errors, it is possible for an analyst to test their difference for statistical significance – even if the two model reports used non-identical random subsets of eval questions.

4.2 Paired analysis

The naive comparison above misses an opportunity to reduce the standard error when two models evaluate the same set of questions. Let $s_{A-B,i} = s_{A,i} - s_{B,i}$ represent the difference between scores on question $i$, and let $\bar{s}_{A-B} = \bar{s}_A - \bar{s}_B$ represent the observed average difference. Then we can estimate the standard error of the estimated difference as

$$\mathrm{SE}_{A-B,\,\mathrm{paired}} = \sqrt{\mathrm{Var}(s_{A-B})/n} = \sqrt{\left(\frac{1}{n-1}\sum_i (s_{A-B,i} - \bar{s}_{A-B})^2\right)/n} \tag{7}$$

This revised standard error can be plugged into Equations 5 and 6 to compute a confidence interval and z-score.
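A sketch of the paired comparison (hypothetical per-question score arrays, aligned so that index $i$ refers to the same question for both models):

```python
import numpy as np

def paired_comparison(scores_a, scores_b):
    """Mean difference, paired SE (Equation 7), z-score and 95% CI (Equations 5-6)."""
    d = np.asarray(scores_a, dtype=float) - np.asarray(scores_b, dtype=float)
    n = len(d)
    mean_diff = d.mean()
    se = np.sqrt(np.var(d, ddof=1) / n)
    return {
        "diff": mean_diff,
        "se": se,
        "z": mean_diff / se,
        "ci95": (mean_diff - 1.96 * se, mean_diff + 1.96 * se),
    }
```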

We can compute the reduction in variance achieved with this paired differences test over the unpaired test. First, write out an expression for the variance in the unpaired case

$$\mathrm{Var}(\hat{\mu}_{A-B,\,\mathrm{unpaired}}) = \left(\mathrm{Var}(s_A) + \mathrm{Var}(s_B)\right)/n$$

and the paired case

$$\mathrm{Var}(\hat{\mu}_{A-B,\,\mathrm{paired}}) = \left(\mathrm{Var}(s_A) + \mathrm{Var}(s_B) - 2\,\mathrm{Cov}(s_A, s_B)\right)/n$$

Combining, and recognizing that the cross-model residuals are uncorrelated, we have

$$\mathrm{Var}(\hat{\mu}_{A-B,\,\mathrm{paired}}) = \mathrm{Var}(\hat{\mu}_{A-B,\,\mathrm{unpaired}}) - 2\,\mathrm{Cov}(x_A, x_B)/n$$

We can thus reduce the variance with paired differences as long as the conditional means of the model scores are correlated; that is to say, if the two models have some amount of agreement on which questions are “easy” and which questions are “hard”.

A short calculation will demonstrate the degree of variance reduction that might be expected. Suppose an eval uses next-token probabilities to form continuous scores with zero conditional variance, and that these scores are uniformly distributed for two models over the $[0,1]$ interval. Suppose that the model scores have a correlation coefficient of 1/3. Then $\mathrm{Var}(s_A) = \mathrm{Var}(s_B) = 1/12$ and $\mathrm{Cov}(x_A, x_B) = \frac{1}{3}\sqrt{\mathrm{Var}(s_A)\,\mathrm{Var}(s_B)} = 1/36$. In this case, using paired differences will reduce the variance of the estimator by 1/3 in relative terms (that is, from 1/6 to 1/9 in absolute terms).

Because eval question scores are likely to be positively correlated, even across unrelated models, paired differences represent a “free” reduction in estimator variance when comparing two models. We therefore recommend using the paired version of the standard error estimate wherever practicable. We encourage authors of technical reports to include pairwise differences, pairwise standard errors, and score correlations whenever two or more models are being evaluated. Pairwise standard errors may be computed either directly on the differences, or using the single-sample standard errors, the Pearson product-moment correlation, and the relation

$$\mathrm{SE}_{A-B,\,\mathrm{paired}} = \sqrt{\mathrm{SE}_A^2 + \mathrm{SE}_B^2 - 2\,\mathrm{SE}_A\,\mathrm{SE}_B\,\mathrm{Corr}(s_A, s_B)}$$

A clustered version of the standard error, appropriate for DROP, QuAC, RACE, SQuAD, MGSM, and other evals where questions are drawn in related groups, is directly computable from the differences as

$$\mathrm{SE}_{A-B,\,\mathrm{paired,\,clustered}} = \frac{1}{n}\left(\sum_c \sum_i \sum_j (s_{A-B,i,c} - \bar{s}_{A-B})(s_{A-B,j,c} - \bar{s}_{A-B})\right)^{1/2} \tag{8}$$

where $s_{A-B,i,c}$ denotes the score difference on the $i$th question within cluster $c$.
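Because Equation 8 is the Appendix A computation applied to the per-question differences, the clustered sketch from Section 2.2 carries over directly:

```python
import numpy as np

def se_paired_clustered(scores_a, scores_b, clusters):
    """Cluster-adjusted SE of the paired difference (Equation 8)."""
    d = np.asarray(scores_a, dtype=float) - np.asarray(scores_b, dtype=float)
    clusters = np.asarray(clusters)
    resid = d - d.mean()
    # Sum of squared within-cluster sums of residual differences
    total = sum(resid[clusters == c].sum() ** 2 for c in np.unique(clusters))
    return np.sqrt(total) / len(d)
```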

A suggested table format for reporting pairwise results is provided in Table 5. A 95% confidence interval on model differences may be computed from the base estimate and standard error, as in Equation 3.

| Eval | Model | Baseline | Model − Baseline | 95% Conf. Interval | Correlation |
|---|---|---|---|---|---|
| MATH | Galleon | Dreadnought | +2.5% (0.7%) | (+1.2%, +3.8%) | 0.50 |
| HumanEval | Galleon | Dreadnought | −3.1% (2.1%) | (−7.2%, +1.0%) | 0.64 |
| MGSM | Galleon | Dreadnought | −2.7% (1.7%) | (−6.1%, +0.7%) | 0.37 |

Table 5: Suggested presentation of pairwise differences, standard errors, confidence intervals, and correlation values as a supplement to main results. In the fictional data above, the difference between the two models on MATH is statistically significant (the confidence interval is positive), but the differences on HumanEval and MGSM are not statistically significant at the 5% level.

We now possess the analytic tools needed to rigorously answer the questions posed in the Introduction. Using pairwise analysis on all three evals, and ensuring that standard errors were appropriately clustered on MGSM, the numbers in Table 5 would lead us to conclude that Galleon indeed outperformed Dreadnought on MATH in a statistically significant way – but that the differences on HumanEval and MGSM are indistinguishable from statistical noise. In other words, while a superficial reading of the eval data might have originally tempted us to conclude that Dreadnought was the overall better-performing model, a closer examination of the data would tend to lead the careful analyst to the opposite conclusion.

5 Power analysis

Power refers to the ability of an experiment to make a measurement of interest in the presence of statistical noise.[NBERw15701] In the context of language model evals, we may wish to know whether a model represents an improvement of some magnitude over another model.[card2020littlepowercomesgreat] Due to the variance implied by sampling questions from the super-population (plus the conditional variance after the questions are chosen), power must always be defined in terms of probability. In this section we present a sample-size formula needed to conduct power analysis for language model evals, and apply it in a worked example to answer the empirical question posed in Section 1.

The sample-size formula – describing the relationship between the hypothesized difference between two models and the number of questions included in an experiment – ought to prove useful in several ways. Consumers of existing evals may use the formula to determine the number of questions to subsample from a large eval, or to determine an appropriate value of $K$, as defined in Section 3.1. If the number of questions in the eval is fixed, consumers can calculate the Minimum Detectable Effect and decide whether the eval is worth running. The authors of new evals may use the formula to decide how many questions should be commissioned.

The inputs into the sample-size formula include:

  • Significance level $\alpha$, which represents the Type I error rate under the null hypothesis
  • Power level $1-\beta$, where $\beta$ represents the Type II error rate under the alternative hypothesis
  • Minimum Detectable Effect $\delta$, which represents the mean score difference between two models under the alternative hypothesis

To simplify notation, let

$$\omega^2 = \mathrm{Var}(x_A) + \mathrm{Var}(x_B) - 2\,\mathrm{Cov}(x_A, x_B)$$

$$\sigma_A^2 = \mathbb{E}[\sigma_{A,i}^2]$$

$$\sigma_B^2 = \mathbb{E}[\sigma_{B,i}^2]$$

Let $z_p$ represent the $(1-p)$th percentile of a standard normal distribution. We assume a paired analysis as described in Section 4.2, and that answers will be sampled $K_A$ times from model $A$ and $K_B$ times from model $B$ (in the simplest case, $K_A = K_B = 1$). Then the number of independent questions $n$ required to achieve a Type I error rate $\alpha$ and Type II error rate $\beta$ with a given Minimum Detectable Effect $\delta$ is

$$n = (z_{\alpha/2} + z_{\beta})^2\,(\omega^2 + \sigma_A^2/K_A + \sigma_B^2/K_B)/\delta^2 \tag{9}$$

The quantities $\omega^2$, $\sigma_A^2$, and $\sigma_B^2$ may be estimated from previous eval data. A short derivation of the above formula is presented in Appendix B.

As a simple example, suppose $\sigma_A^2 = \sigma_B^2 = 0$ and $\omega^2 = 1/9$, following the conditions described in Section 4.2. Suppose we wish to detect an absolute difference of $\delta = 0.03$ at least 80% of the time ($\beta = 0.20$) with a false-positive rate of 5% ($\alpha = 0.05$). Then the eval will need to contain at least

$$n = (z_{0.025} + z_{0.20})^2\,(1/9)/(0.03)^2 \approx 969$$

independent questions. Although these parameters are fictional, they are reasonable, and suggest that new evals should contain at least 1,000 questions in order to have good signaling ability.
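Equation 9 is a one-liner given normal quantiles; a sketch using SciPy that reproduces the worked example above (the function name and keyword arguments are our own):

```python
from scipy.stats import norm

def required_questions(delta, omega_sq, sigma_a_sq=0.0, sigma_b_sq=0.0,
                       k_a=1, k_b=1, alpha=0.05, beta=0.20):
    """Number of independent questions needed (Equation 9)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(1 - beta)
    return z**2 * (omega_sq + sigma_a_sq / k_a + sigma_b_sq / k_b) / delta**2

print(required_questions(delta=0.03, omega_sq=1/9))  # ~969
```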

If the number of questions is fixed, and the practitioner wishes to know the Minimum Detectable Effect associated with $n$, Equation 9 is easily inverted as

$$\delta = (z_{\alpha/2} + z_{\beta})\sqrt{(\omega^2 + \sigma_A^2/K_A + \sigma_B^2/K_B)/n} \tag{10}$$

The above equation may be used, for instance, to predict the effect of increasing the per-question sample counts $K_A$ and $K_B$ on the Minimum Detectable Effect in a nondeterministic eval. Suppose that $\sigma_A^2 = \sigma_B^2 = 1/6$ and $\omega^2 = 1/9$, following the conditions of Section 3.1 with an additional assumption that $\mathrm{Corr}(x_A, x_B) = 1/3$. Suppose $n = 198$, $\alpha = 0.05$, and $\beta = 0.20$. It follows from Equation 10 that increasing $K_A = K_B$ from 1 to 10 reduces the Minimum Detectable Effect from 13.2% to 7.5%.
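The inverted form is equally direct; under the stated assumptions, a sketch of Equation 10 reproduces the effect of increasing $K$:

```python
from scipy.stats import norm

def minimum_detectable_effect(n, omega_sq, sigma_a_sq=0.0, sigma_b_sq=0.0,
                              k_a=1, k_b=1, alpha=0.05, beta=0.20):
    """Minimum Detectable Effect for a fixed question count (Equation 10)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(1 - beta)
    return z * ((omega_sq + sigma_a_sq / k_a + sigma_b_sq / k_b) / n) ** 0.5

for k in (1, 10):
    mde = minimum_detectable_effect(n=198, omega_sq=1/9,
                                    sigma_a_sq=1/6, sigma_b_sq=1/6,
                                    k_a=k, k_b=k)
    print(k, round(mde, 3))  # approximately 0.133 and 0.076
```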

Cluster-adjusted versions of Equations 9 and 10 are included in Appendix C.

6 Conclusion

This article has presented a statistical treatment of language model evaluations, drawing heavily from existing literature in experiment design.

For single-model analysis, we presented analytic formulas for naive and clustered standard errors, and for two-model analysis, we presented formulas for unpaired, paired, and paired-and-clustered standard errors. We recommended several techniques for reducing the variance of estimates, including resampling answers, analyzing next-token probabilities, and computing question-level differences between models, and advised against adjusting the sampling temperature for the sake of variance reduction. We suggest that practitioners include standard errors of their eval scores in their technical reports, and also include pairwise differences, pairwise standard errors, and score correlations when multiple models are being compared. We presented a sample-size formula so that model evaluators can determine in advance the size of difference that may be reliably detected between two models on a given eval using a given resampling strategy.

Experiment design represents a large and venerable literature. We hope that with proper statistical tools, such as those presented in this article, machine learning practitioners will think of their model evaluations as informative experiments rather than a series of contests to produce the largest number. We encourage researchers to continue exploring statistical techniques found in other experimental fields in order to further enrich our shared understanding of language models and their capabilities.


Appendix A Clustered standard errors

We approach the problem with linear regression. Let $s_{i,c}$ denote the $i$th of $n_c$ question scores within cluster $c$, decomposed into a mean and random component as

$$s_{i,c} = x_{i,c} + \epsilon_{i,c}$$

Let $\delta_{i,c} = x_{i,c} - \mu$ represent the deviation of the conditional (question-level) mean from the true mean (that is, the hypothetical mean across all questions and clusters). Then the regression can be specified as

$$s_{i,c} = \mu + \delta_{i,c} + \epsilon_{i,c}$$

where $\epsilon_{i,c}$ is a random component and $\delta_{i,c}$ acts as a question-level fixed effect that is not separately estimated. We continue to estimate $\hat{\mu} = \bar{s}$ and denote the regression residual $u_{i,c} = s_{i,c} - \bar{s}$. The traditional clustered standard error formula is

$$\mathrm{Var}_{\mathrm{clustered}}(\hat{\mu}) = (X'X)^{-1}\left(\sum_c X_c'\,\Omega_c\,X_c\right)(X'X)^{-1}$$

where $X$ represents the full matrix of covariates, $X_c$ represents the covariates within cluster $c$, and $\Omega_c$ represents the residual covariance matrix within cluster $c$.

In our application, $X = 1_n$ (a vector of $n$ ones), $X_c = 1_{n_c}$ (a vector of $n_c$ ones), and $\Omega_c = u_c u_c'$. Thus $(X'X)^{-1} = 1/n$ and $X_c'\,\Omega_c\,X_c = \sum_i \sum_j u_{i,c}\,u_{j,c} = \sum_i \sum_j (s_{i,c} - \bar{s})(s_{j,c} - \bar{s})$. Plugging in these values yields

$$\mathrm{Var}_{\mathrm{clustered}}(\hat{\mu})=\sum_{c}\sum_{i}\sum_{j}(s_{i,c}-\bar{s})(s_{j,c}-\bar{s})/n^{2}=\sum_{c}\sum_{i}(s_{i,c}-\bar{s})^{2}/n^{2}+\sum_{c}\sum_{i}\sum_{j\neq i}(s_{i,c}-\bar{s})(s_{j,c}-\bar{s})/n^{2}$$

Recognizing that the first term is equal to the unclustered variance, we can write

$$\mathrm{Var}_{\mathrm{clustered}}(\hat{\mu})=\mathrm{Var}_{\mathrm{unclustered}}(\hat{\mu})+\sum_{c}\sum_{i}\sum_{j\neq i}(s_{i,c}-\bar{s})(s_{j,c}-\bar{s})/n^{2}$$

The two-sample version can be developed by analyzing the question-level score differences rather than the scores.
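To make this concrete, here is a minimal sketch of the estimator in code, assuming `scores` is an array of question-level scores $s_{i,c}$ and `clusters` an aligned array of cluster labels (both names are illustrative, not from this paper):

```python
import numpy as np

def clustered_var_of_mean(scores, clusters):
    """Clustered variance of the mean score, per the formula above."""
    scores = np.asarray(scores, dtype=float)
    clusters = np.asarray(clusters)
    n = len(scores)
    resid = scores - scores.mean()  # u_{i,c} = s_{i,c} - s_bar
    total = 0.0
    for c in np.unique(clusters):
        # sum_i sum_j u_{i,c} u_{j,c} = (sum of residuals in cluster c)^2
        total += resid[clusters == c].sum() ** 2
    return total / n**2
```

Per the note above, the two-sample version follows by passing the question-level score differences $s_{A,i,c}-s_{B,i,c}$ in place of `scores`.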

Appendix B Sample-size formula derivation

Following [NBERw15701], we set up the power analysis with a hypothetical measurement $\tilde{s}_{A-B}$ that will trigger a Type I error with probability $\alpha$ and a Type II error with probability $\beta$. The z-scores of such a measurement under the null hypothesis and the alternative hypothesis are

$$z_{\alpha/2}=\tilde{s}_{A-B}\,\Big/\,\sqrt{(\omega^{2}+\sigma_{A}^{2}/K_{A}+\sigma_{B}^{2}/K_{B})/n}$$

$$z_{\beta}=(\delta-\tilde{s}_{A-B})\,\Big/\,\sqrt{(\omega^{2}+\sigma_{A}^{2}/K_{A}+\sigma_{B}^{2}/K_{B})/n}$$

Combining the two equations to eliminate $\tilde{s}_{A-B}$ (solving the first for $\tilde{s}_{A-B}$ and substituting into the second), we have an expression for the MDE in terms of the other variables

$$\delta=(z_{\alpha/2}+z_{\beta})\sqrt{(\omega^{2}+\sigma_{A}^{2}/K_{A}+\sigma_{B}^{2}/K_{B})/n}$$

Or, inverting the equation, we have a sample-size formula for the number of questions $n$ required to produce a desired MDE $\delta$:

$$n=(z_{\alpha/2}+z_{\beta})^{2}\,(\omega^{2}+\sigma_{A}^{2}/K_{A}+\sigma_{B}^{2}/K_{B})\,/\,\delta^{2}$$
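As an illustration, the formula maps directly into a short calculator. The sketch below assumes the variance components and resampling counts have been estimated elsewhere; the function name and example values are hypothetical:

```python
from math import ceil
from scipy.stats import norm

def questions_needed(omega2, sigma2_A, sigma2_B, K_A, K_B,
                     delta, alpha=0.05, power=0.80):
    """Number of questions n required to detect a difference delta."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided Type I error threshold
    z_beta = norm.ppf(power)           # power = 1 - beta
    variance = omega2 + sigma2_A / K_A + sigma2_B / K_B
    return ceil((z_alpha + z_beta) ** 2 * variance / delta ** 2)

# e.g. questions_needed(0.15, 0.20, 0.20, 10, 10, delta=0.02)
```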

Appendix C Cluster-adjusted sample-size formula

In order to account for clustered questions, the sample-size formula requires cluster-adjusted versions of $\omega^{2}$, $\sigma_{A}^{2}$, and $\sigma_{B}^{2}$. In this section we develop formulas for estimating these quantities from previous eval data.

Start with the clustered score variance estimator

$$\mathrm{Var}_{\mathrm{clustered}}(\hat{\mu}_{A-B})=\frac{1}{n}\sum_{c}\sum_{i}\sum_{j}(s_{A-B,i,c}-\bar{s}_{A-B})(s_{A-B,j,c}-\bar{s}_{A-B})$$

Decomposing $s$ into $x$ and $\epsilon$, this becomes

$$\mathrm{Var}_{\mathrm{clustered}}(\hat{\mu}_{A-B})=\frac{1}{n}\sum_{c}\sum_{i}\sum_{j}(x_{A,i,c}-x_{B,i,c}+\epsilon_{A,i,c}-\epsilon_{B,i,c}-\bar{s}_{A-B})(x_{A,j,c}-x_{B,j,c}+\epsilon_{A,j,c}-\epsilon_{B,j,c}-\bar{s}_{A-B})$$

which, after dropping cross-terms that are zero in expectation, reduces to

$$\mathrm{Var}_{\mathrm{clustered}}(\hat{\mu}_{A-B})=\frac{1}{n}\sum_{c}\sum_{i}\sum_{j}(x_{A,i,c}-x_{B,i,c}-\bar{s}_{A-B})(x_{A,j,c}-x_{B,j,c}-\bar{s}_{A-B})$$

$$\qquad+\frac{1}{n}\sum_{c}\sum_{i}\sum_{j}\epsilon_{A,i,c}\,\epsilon_{A,j,c}+\frac{1}{n}\sum_{c}\sum_{i}\sum_{j}\epsilon_{B,i,c}\,\epsilon_{B,j,c}$$

We can denote the three terms on the right-hand side as

$$\mathrm{Var}_{\mathrm{clustered}}(\hat{\mu}_{A-B})=\omega_{\mathrm{clustered}}^{2}+\sigma_{A,\mathrm{clustered}}^{2}+\sigma_{B,\mathrm{clustered}}^{2}$$

with

$$\omega_{\mathrm{clustered}}^{2}=\mathrm{Var}_{\mathrm{clustered}}(x_{A})+\mathrm{Var}_{\mathrm{clustered}}(x_{B})-2\,\mathrm{Cov}_{\mathrm{clustered}}(x_{A},x_{B})$$

$$\mathrm{Var}_{\mathrm{clustered}}(x_{A})=\frac{1}{n}\sum_{c}\sum_{i}\sum_{j}(x_{A,i,c}-\bar{s}_{A})(x_{A,j,c}-\bar{s}_{A})$$

$$\mathrm{Var}_{\mathrm{clustered}}(x_{B})=\frac{1}{n}\sum_{c}\sum_{i}\sum_{j}(x_{B,i,c}-\bar{s}_{B})(x_{B,j,c}-\bar{s}_{B})$$

$$\mathrm{Cov}_{\mathrm{clustered}}(x_{A},x_{B})=\frac{1}{n}\sum_{c}\sum_{i}\sum_{j}(x_{A,i,c}-\bar{s}_{A})(x_{B,j,c}-\bar{s}_{B})$$

$$\sigma_{A,\mathrm{clustered}}^{2}=\frac{1}{n}\sum_{c}\sum_{i}\sum_{j}\epsilon_{A,i,c}\,\epsilon_{A,j,c}$$

$$\sigma_{B,\mathrm{clustered}}^{2}=\frac{1}{n}\sum_{c}\sum_{i}\sum_{j}\epsilon_{B,i,c}\,\epsilon_{B,j,c}$$

These clustered versions of $\omega^{2}$, $\sigma_{A}^{2}$, and $\sigma_{B}^{2}$ can be plugged into Equations 9 and 10 without further modification.
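A minimal sketch of estimating $\omega_{\mathrm{clustered}}^{2}$ from data, assuming `x_A` and `x_B` are aligned NumPy arrays of per-question mean scores $\hat{x}$ for the two models and `clusters` is the cluster label array (all names are illustrative):

```python
import numpy as np

def clustered_moment(dev1, dev2, clusters):
    # (1/n) * sum_c (sum of dev1 within c) * (sum of dev2 within c)
    total = sum(dev1[clusters == c].sum() * dev2[clusters == c].sum()
                for c in np.unique(clusters))
    return total / len(dev1)

def omega2_clustered(x_A, x_B, clusters):
    dev_A = x_A - x_A.mean()  # x_{A,i,c} - s_bar_A
    dev_B = x_B - x_B.mean()  # x_{B,i,c} - s_bar_B
    return (clustered_moment(dev_A, dev_A, clusters)        # Var_clustered(x_A)
            + clustered_moment(dev_B, dev_B, clusters)      # Var_clustered(x_B)
            - 2 * clustered_moment(dev_A, dev_B, clusters)) # Cov_clustered
```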

In practice, in both the clustered and non-clustered cases, the mean conditional variance and variance of conditional means will need to be estimated from previous data having $K\gg 1$. For the sake of completeness, we briefly walk through this estimation process.

Let $s_{M,i,c,k}$ represent the $k$th score on the $i$th question within the $c$th cluster on model $M$, and estimate $\hat{x}_{M,i,c}=\frac{1}{K}\sum_{k=1}^{K}s_{M,i,c,k}$. This estimate is sufficient to estimate $\hat{\omega}_{\mathrm{clustered}}^{2}$. The clustered mean conditional variance on model $M$ may then be estimated as

$$\hat{\sigma}_{M,\mathrm{clustered}}^{2}=\frac{1}{n(K-1)}\sum_{k}\sum_{c}\sum_{i}\sum_{j}(s_{M,i,c,k}-\hat{x}_{M,i,c})(s_{M,j,c,k}-\hat{x}_{M,j,c})$$

Note that we divide by $K-1$ instead of $K$ in order to obtain a consistent variance estimator with small $K$.
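A sketch of this estimator, assuming `scores` is an $(n,K)$ array with one row per question (aligned with `clusters`) and one column per resample; the names are illustrative:

```python
import numpy as np

def sigma2_clustered(scores, clusters):
    n, K = scores.shape
    eps = scores - scores.mean(axis=1, keepdims=True)  # s_{M,i,c,k} - x_hat_{M,i,c}
    total = 0.0
    for c in np.unique(clusters):
        # For each resample k, square the within-cluster residual sum:
        # sum_i sum_j eps_{i,c,k} * eps_{j,c,k}
        total += (eps[clusters == c].sum(axis=0) ** 2).sum()
    return total / (n * (K - 1))  # divide by K-1, per the note above
```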

If a subsample of questions is being used for variance estimation, we recommend sampling at the cluster level (i.e. drawing clusters in their entirety) in order to capture the intra-cluster variance structure.
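For instance, a cluster-level subsample might be drawn as in the sketch below; the subsample fraction and seed are arbitrary, and the names are illustrative:

```python
import numpy as np

def cluster_subsample_mask(clusters, frac=0.5, seed=0):
    """Boolean mask keeping every question in a random subset of clusters."""
    rng = np.random.default_rng(seed)
    unique = np.unique(clusters)
    keep = rng.choice(unique, size=int(len(unique) * frac), replace=False)
    return np.isin(clusters, keep)
```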
